2025
Militancy⚑
-
New: Add 38c3 talk on state of surveillance.
38C3 Talk: State of Surveillance: A year of digital threats to civil society
-
New: Anticolonialism in technology.
Art
- Decolonizing technology
Articles
- Shanzhai: An Opportunity to Decolonize Technology? by Sherry Liao
- Técnicas autoritarias y técnicas democráticas by Lewis Mumford (pdf)
Books
- Race after technology by Ruha Benjamin
- The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World by Antony Loewenstein
- Frantz Fanon books
- Hacking del sé by Ippolita
- Tecnologie conviviali by Carlo Milani
- Descolonizar y despatriarcalizar las tecnologías by Paola Ricaurte Quijano
Research
Talks
-
New: Add a poem about the lack of action.
A wonderful poem by the German pastor Martin Niemöller about the cowardice of German intellectuals after the Nazis rose to power and the subsequent purge of their chosen targets, group after group.
«When the Nazis came for the communists, I remained silent, because I was not a communist. When they locked up the social democrats, I remained silent, because I was not a social democrat. When they came for the trade unionists, I did not protest, because I was not a trade unionist. When they came for the Jews, I did not protest, because I was not a Jew. When they came for me, there was no one left to protest.»
-
New: Oslo advertisement on the importance of symbols.
-
New: Add Pantube.
Pantube: a left-wing YouTube
-
New: Add rich don't work representation.
The rich don't work: a game to get a grasp of how much money they make without doing anything
-
New: Add a podcast about Operation Gladio.
No es el fin del mundo: 46. ¿Qué fue la Operación Gladio?: something to pass along to any reformist person
-
New: Add videos by David zona zero.
Antifascism⚑
-
New: Add a couple of interesting articles.
-
New: Add the sistemapunk website.
- Sistemapunk: investigates nazi groups
Hacktivism⚑
Mobile Verification Toolkit⚑
-
New: Chaos feminist convention.
Chaos feminist convention (looks like it will be mostly in German)
-
New: Add 38C3 talk from pegasus to predator.
38C3 talk: From Pegasus to Predator - The evolution of Commercial Spyware on iOS
Collectives⚑
-
New: Remember these useful things.
Use c3lingo for German talks
There are many talks in German, which may deter you from attending them, but don't worry: if they are hosted in the main tracks you can listen to a live English translation.
You can access the translations here; they work best with the CCC wifi.
At Chaos Mentors, they connect experienced mentors with first-time visitors of Congress. Their goal is to make Congress more inclusive by supporting those who might not usually attend, including people with special needs.
-
If you're lost on what shifts you might want to take, here are the ones I've found easy to do:
- Talk access shifts: You will be at the doors of the talks, making sure they are opened and closed when they should be and that people do not linger in the corridors. Whenever you want to see a talk in one of the main tracks, take these shifts, as you'll be able to watch it fine and also help the organisation. If you know for sure you'll be attending one of these talks, reserve the shift the day before, as they usually fly away soon.
- Standby angel: You just stay in heaven for a couple of hours waiting for someone to tell you what to do. It's a good shift to get to know people and to find your place in heaven. If you know for sure you'll have a free slot between talks and workshops, reserve the shift the day before, as they usually fly away soon.
- At teardown: If you can stay after the congress has ended, help with the teardown. I first go to the assemblies that have attracted me the most and then go to heaven to see what else is needed.
-
New: Add critical switch.
Critical Switch: a non-mixed transhackfeminist collective interested in free culture, privacy and digital security. "We promote security culture to create safer spaces in social and activist movements."
-
New: Add Mexico collectives.
Anti-ableism⚑
-
New: Add the Artefactos project.
https://www.artefactos.org: tech people and families of kids with different needs, collaborating together. They make 3D prototypes that you can print yourself.
Feminism⚑
- New: New References.
-
New: Add good movies and tv shows to discuss practical feminism with teenagers and adults.
Palestine⚑
-
New: Add Al Jazeera documentaries.
In light of the current events in Palestine, a large number of filmmakers have made their films about Palestine available online for free.
They are in Arabic and have no subtitles, so downloading them is not much use, but you can watch them directly on YouTube with the auto-generated subtitles.
- A collection of documentaries published by Al Jazeera Documentary: 1, 2, 3
- The documentary "Guardián de la memoria"
- The documentary "Un asiento vacío"
- The documentary "El piloto de la resistencia"
- The documentary "Jenin"
- The documentary "El olivo"
- The documentary "Escenas de la ocupación en Gaza 1973"
- The documentary "Gaza lucha por la libertad"
- The documentary "Los hijos de Arna"
- The short film "Strawberry"
- The short film "The Place"
- The documentary "El alcalde"
- The documentary "La creación y la Nakba 1948"
- The documentary "Ocupación 101"
- The documentary "La sombra de la ausencia"
- The documentary "Los que no existen"
- The documentary "Como dijo el poeta"
- The documentary "Cinco cámaras rotas"
- The feature film "Paradise Now"
- The short film "Abnadam"
- The feature film "Bodas de Galilea"
- The feature film "Kofia"
- The documentary feature "Slingshot Hip Hop"
- The documentary feature "Tel Al-Zaatar"
- The documentary feature "Tal al-Zaatar - Detrás de la batalla"
- The documentary "In the Grip of the Resistance"
- The documentary "Swings"
- The documentary "Naji al-Ali es un artista visionario"
- The documentary "La puerta superior"
- The documentary feature "En busca de Palestina"
- The feature film "La sal de este mar"
- The documentary feature "Hakki Ya Bird"
- The series "Palestina Al-Taghriba"
- The series "Yo soy Jerusalén"
Detecting police infiltrators⚑
-
New: Damned infiltrators.
Infiltrators ("estupas") are among the worst instruments of state repression. Not only because they extract information from the collectives, but because they generate an atmosphere of distrust and fear that is often paralysing or that even dismantles the collectives themselves. The worst scum of society...
Unfortunately, it's quite in fashion in the Spanish state. Luckily we are not helpless: several comrades are producing materials to familiarise ourselves with this problem, from the documentary Infiltrats to the Manual para destapar a un infiltrado (you can read El Salto's article about the manual).
These wretches will probably adapt their ways of working to get past these detection methods, but at least social movements now have a formal base to work from.
References
Conflict⚑
-
New: Add notes on conflict from an anti-punitivist point of view.
Loose thoughts on how to look at conflict from an anti-punitivist point of view
- Stop seeing conflicts as a battle; they are an opportunity for transformation
- Conflicts should be resolved collectively whenever possible
- If you ban a guy for sexist behaviour you are only displacing the problem. He will keep drifting through different collectives until he takes root in a weaker one and torpedoes it
- It's hard to see the line between the therapeutic and the transformative
- What is the collective responsibility for a person's transformation?
- We lack tools for:
  - conflict management in general
  - managing physical conflicts in particular
  - accompanying both sides of a conflict
  - accompanying an aggressor
- Is everything that makes me feel bad violence?
- Each situation is so particular that protocols are useless. It is much better to expose ourselves, collectively and often, to conflict situations and to generate from that practice the tools that can serve us, so that when the moment of truth arrives they come out intuitively.
References
Movies
Books
TV series
Podcasts
- El marido (Ciberlocutorio)
- El cancelado (Ciberlocutorio)
- Antipunitivismo con Laura Macaya (Sabor a Queer)
- Procesos restaurativos, feministas y sistémicos (Fil a l'agulla, in the Nociones Comunes course "Me cuidan mis amigas")
Articles
- Con penas y sin glorias: reflexiones desde un feminismo antipunitivo y comunitario
- Expulsar a los agresores no reduce necesariamente la violencia
- Antipunitivismo remasterizado
- Reflexiones sobre antipunitivismo en tiempos de violencias
- Indispuestas. Cuando nadie quiere poner la vida en ello
- La deriva neoliberal de los cuidados
- Justicia transformativa: del dicho al hecho
- Las malas víctimas responden
Other tools
Anarchism⚑
Historical memory⚑
Basque conflict⚑
-
New: Add references on the Basque conflict.
- Third season of (de eso no se habla): "Se llamaba como yo" is a documentary series about the memory of the girl Begoña Urroz, about a family's five decades of silence… and about the noise that broke it.
Labor⚑
-
New: Add a reference to the novel Tierra de la luz.
- No hay negros en el Tíbet: Episodio 47 - Lucía Asué Mbomío: They present the novel "Tierra de la Luz" by Lucía Asué Mbomío. A story that focuses on the seasonal workers from the south who labour in the greenhouses under extremely harsh conditions, and on the injustices experienced in the fields. A novel "under plastic", full of emotion, critique and touches of magical realism.
Domestic workers⚑
-
New: Introduce research on domestic workers.
Note: I'm by no means an expert on this topic; these are the key points I've picked up from talking about it with workers and agency staff. So verify everything before taking it as truth!
Care of dependent elderly people
Caring for elderly people is a big deal, especially once they start to become dependent. They usually need care for most of the day. Currently there are the following options for providing that care:
- The close support network (usually the women of the family) takes on the care.
- Part or all of the care is outsourced, either to a nursing home, to day centres, or by hiring workers who come to the home.
In this rotten world where public services are being dismantled, the public supply of day centres and nursing homes is insufficient and generally in the hands of incompetent politicians (let us never forget the 7,291 deaths that weigh on Ayuso's shoulders {bastard!}).
Add to this that the women of the family now have jobs of their own, plus the precarisation of the domestic workers' sector, and the result is that people (above all people with money) end up hiring live-in workers.
Live-in domestic work
In reality this work is slavery covered by a legal veneer. Taking advantage of the fact that the profession is feminised and mostly done by migrant people, working conditions are imposed that do not comply with the workers' statute.
With a salary that normally does not exceed the minimum wage, these workers:
- Work far more than 40 hours a week.
- Do tasks outside their contract, such as cleaning the house or serving meals.
- Are shut in at their workplace. Even when they "stop working" they remain under their employers' control.
- Being alone with the people they care for, 24 hours a day, cases of gender violence are common. Many tell of barring their bedroom door at night.
- The spaces assigned to them (bedrooms or bathrooms) are not respected and are generally used by other family members whenever they please, taking away even their own room.
- Generally get about two hours a day off. But they usually work in neighbourhoods far from their home, with prices and leisure options far beyond their means, so they normally use those hours to go for a walk. In winter it gets harder with the cold and the rain.
- Those who get weekends off have to pay for a room or flat they can only enjoy a few days a week.
- Have to put up with the mistreatment and tyranny typical of elderly people who are starting to lose their minds. At that age classism and racism get worse while self-control and filters disappear, which creates very unpleasant situations that often end in psychological and physical abuse.
And although all of this is well known by society, it is a model that continues to be widely used.
Working hours
In some cases the workers get the weekend off, 36 consecutive hours according to the law, which could be from Saturday at 9:00 to Sunday at 21:00, plus 2 (unpaid) hours a day on weekdays. That adds up to 122 hours worked per week, far above the established 40 hours.
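A quick sanity check of that figure (a sketch assuming the 36-hour weekend rest and the 2 daily free hours on the 5 weekdays are the only time off):
```bash
# Hours in a week minus the weekend rest and the weekday free hours:
# 168 - 36 - 10 = 122 hours at the employer's disposal.
echo $(( 7 * 24 - 36 - 5 * 2 ))
```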
And although in theory employers are obliged to implement a system to register their workers' working hours, in practice this is hard to enforce.
The law also establishes that, on top of the 40 weekly hours, there can be 20 extra hours of presence. Presence hours are paid at the normal hourly rate because the domestic workers' regime establishes it that way; it is not like other collective agreements.
Most workers don't know they are entitled to these 20 additional hours. Presence hours can be claimed for the last 12 months; older ones expire. This can amount to around 18,000 euros to claim. For that you need proof that the worker is actually doing those hours. One way to fight it is to ask the employer to justify what other people they have hired to care for the dependent person, because gathering evidence inside a private home is very hard. If no schedule appears in the contract, a 24h schedule can be assumed. You can also ask the neighbours, or check whether the worker is registered (empadronada) at the address.
What is clear is that the employer has to define the working hours in the contract, which they don't always do.
Sleeping at the workplace
The contract must state whether the worker sleeps at the workplace.
It is very hard to regulate how many times they are woken up during the night, so fighting over that is still complicated, although progress is being made.
Violence and harassment in domestic employment
According to Real Decreto 893/2024, as read by noticias.juridicas.com:
Leaving the home in the face of a situation of violence or harassment suffered by the worker cannot be considered a resignation nor be grounds for dismissal, without prejudice to the worker's right to request the termination of the contract under article 50 of the ET and to request precautionary measures if a claim is filed, in accordance with the LRJS.
Salary
The norm is to pay the minimum wage, although four out of ten workers don't even get that. Keep in mind that the minimum wage was updated in January 2025 to 1,383 euros. Many will probably not even get that raise.
If the 20 presence hours are counted, the salary would be approximately 2,000 euros a month (2,094 according to Senda de Cuidados' 2024 salary table for a 6-nights-a-week arrangement).
Fighting for their rights
Usually, even if you explain all the rights they have, the workers don't want to claim or exercise them because they don't dare, out of fear of losing their job or of other reprisals.
Personal data
Advisory agencies can pull up a social security number with just a name and a DNI. This is done to make the paperwork easier, but if you don't know about it, it can freak you out.
Payment method
The employer is responsible for paying the worker by bank transfer as a payroll payment, not as a regular transfer, since otherwise the worker doesn't get the perks banks offer for having a salary paid into the account.
Even when there are agencies involved, they generally act as intermediaries and only do advisory work, so the contract is usually signed with the family of the person being cared for, and it is the family who has to pay the worker. Unless the agency is a temp agency (ETT) for domestic workers, in which case the agency hires them directly.
Labour inspectorate complaints
Complaints to the labour inspectorate under this regime run a different course depending on which inspector gets the case, because the inspectorate cannot enter private homes unannounced even when they are a workplace. So some inspectors shove complaints about mistreatment of domestic workers into a drawer; others don't, and issue formal requirements and so on.
Taxes
Domestic workers' payslips carry no IRPF withholding. Normally the salary and the prorated extra payments are itemised; if they are lumped together it is a sloppy payslip.
Paperwork
Once the contract is signed, the employer has to give the worker the digital fingerprint of the contract as communicated to the public employment service. When you register a worker you have to send two files, one to the general treasury with the registration and another to the state public employment service with the contract. If they don't, it is a formal defect, not a very serious one.
References
Decent companies: Not everything is horrendous; there are worker cooperatives that offer these services under conditions the workers themselves have decided:
Senda de Cuidados publishes its salary table, which gives you an idea of the pay and of the different types of working arrangements.
Legal improvements
- Real Decreto 893/2024, de 10 de septiembre, por el que se regula la protección de la seguridad y la salud en el ámbito del servicio del hogar familiar.
- Real Decreto-ley 16/2022, de 6 de septiembre, para la mejora de las condiciones de trabajo y de Seguridad Social de las personas trabajadoras al servicio del hogar.
Articles about live-in work
Collaborating tools⚑
Aleph⚑
-
Correction: Correct the releases url.
-
New: Add warning for new potential users.
WARNING: Check out the investigative journalism article before using Aleph
-
New: Compare investigative journalism tools.
After reviewing Aleph Pro, Open Aleph, DARC and Datashare I feel that the investigative reporting software environment, as of June 2025, is very brittle and in a huge crisis that will reach a breaking point in October 2025, when OCCRP will do the switch from Aleph to Aleph Pro.
Given this scenario, I think the best thing to do, if you already have an Aleph instance, is to keep using it until things stabilise. As you won't have software updates after October 2025, I suggest that from then on you protect the service behind a VPN and/or an SSL client certificate.
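As a sketch of the client-certificate option (the file names, subjects and the nginx hint are assumptions, adapt them to your setup), you could create your own CA with openssl, issue a certificate per trusted user, and require it at the reverse proxy in front of Aleph:
```bash
# Create a private CA (assumption: self-signed, 5 years of validity).
openssl req -x509 -newkey rsa:4096 -sha256 -days 1825 -nodes \
  -keyout aleph-ca.key -out aleph-ca.crt -subj "/CN=aleph-ca"

# Create a key and a certificate signing request for one user.
openssl req -newkey rsa:4096 -nodes \
  -keyout client.key -out client.csr -subj "/CN=aleph-user"

# Sign the user's certificate with the CA.
openssl x509 -req -in client.csr -CA aleph-ca.crt -CAkey aleph-ca.key \
  -CAcreateserial -days 365 -out client.crt

# In nginx, point ssl_client_certificate to aleph-ca.crt and set
# ssl_verify_client on, so only holders of a signed certificate get through.
```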
I also feel it's a great moment to make a strategic decision about how you want to use an investigative reporting platform. Some key questions are:
- How much do you care about your data being leaked or lost by third parties?
- How much do you trust OCCRP, DARC or ICIJ?
- Do you want to switch from a self-hosted platform to an externally managed one? Will they be able to give a better service?
- If you want to stay on a self-hosted solution, should you migrate to Datashare instead of Open Aleph?
- How dependent are you on open source software? How fragile are the teams that support that software? Can you help change that fragility?
- Should you use AI in your investigative processes? If so, where and when?
I hope the analysis below helps shed some light on some of these questions. The only one that is not addressed is the AI one, as it's a more political, philosophical question that would make the document even longer.
Analysis of the present
Development dependent on the US government
The two main programs are developed by non-profits that received part of their funding from the US government: Aleph by OCCRP and Datashare by ICIJ. This funding was lost after Naranjitler's administration funding cuts:
- OCCRP lost this year 38% of their operational funds: "As a result, we had to lay off 40 people --- one fifth of our staff --- and have temporarily reduced some of the salaries of others. But there is more. OCCRP has also been funding a number of organizations across Europe, in some of the most difficult countries. Eighty percent of those sub-grants that we provide to other newsrooms have been cut as well."
- ICIJ lost this year 8.6% of their operational funds with no apparent effects on the software or the staff.
OCCRP decided to close Aleph's source code, triggering the team split
With OCCRP's decision to close Aleph's source, an important part of the team (the co-leader of the research and data team, the Chief Data Editor and a developer) decided to leave the project and found DARC.
They look to be on good terms and are collaborating, although it could be OCCRP trying to save face after this dark turn.
These programs have very few developers and no community behind them
- Aleph looks to be currently developed by 2 developers, and its development has stagnated since the main developer (`pudo`) stopped developing and moved to Open Sanctions 4 years ago. 3 key members of the team have moved on to Open Aleph after the split. We can only guess whether this is the same team that is developing Aleph Pro. If it's not, then they are developers we know nothing about and can't audit.
- Open Aleph looks to be developed by 4 people, 3 of whom were part of the Aleph development team until the break-up. The other one created a company to host Aleph instances 4 years ago.
- Datashare seems to be developed by 6 developers.
In all projects, pull requests from the community have been very scarce.
The community support is not that great
My experience requesting features and proposing fixes for Aleph before the split is that they answer well on their Slack, but are slow with the issues and pull requests that fall outside their planned roadmap, even if they are bugs. I've been running a script on each ingest to fix a UI bug for a year already; I tried to work with them on solving it, without success.
I don't have experience with Datashare, but they do answer and fix the issues people open.
Analysis of the available software
Aleph Pro
The analysis is based on their announcement and their FAQ.
Pros
- As long as you have less than 1TB of data and are a nonprofit it will, for now, cost you way less than hosting your own solution
- OCCRP is behind the project
Cons
- They seem to have an unstable, small and broken development team
-
They only offer 1TB of data, which is enough for small to medium projects but doesn't give much space to grow
-
They lost my trust
There are several reasons that make me hesitant to trust them:
- They don't want to publish their source code
- They decided that the path to solve a complicated financial situation is to close their source code
- They advocated in the past (and even now!) that being open source was a cornerstone of the project, and yet they are closing their code.
- They hid that 52% of their funding came from the US government.
With the following consequences:
- I would personally not give them my data or host their software.
- I wouldn't be surprised if in the future they retract their promises, such as offering Aleph Pro for free forever for nonprofit journalism organizations.
-
You lose sovereignty over your data
Whether you upload your data to their servers or host a closed-source program on yours, you have no way of knowing what they are doing with your data. Given their economic situation, doing business with the data could be an option.
It could also be potentially difficult to extract your data in the future.
-
You lose sovereignty over your service
If they host the service you depend on them for any operation such as maintenance, upgrades, and keeping the service up.
You'll also no longer be able to change the software to patch problems, and will depend on them for their implementation and rollout.
You'll no longer have any easy way to know what the program does. This is critical from a security point of view, as introduced backdoors would go unnoticed. It's also worrying because we could not audit how they implement the AI; AI solutions are known to be biased and may thwart the investigative process.
Finally, if they decide to shut down or change the conditions, you're stuck.
-
It looks like they are selling smoke
Their development activity has dropped in recent years, they have a weakened team, and yet they are promising a complete rewrite to create brand new software, in an announcement filled with buzzwords such as AI without giving any solid evidence.
I feel that the whole announcement is written to urge people to buy their product and to save face. It's not written for the community or their users; it's for those who can give them money.
-
They offer significant performance upgrades and lower infrastructure costs at the same time that they incorporate the latest developments in data-analysis and AI
Depending on how they materialise the new data-analysis and AI features, this will mean anywhere from a small to a large increase in infrastructure costs. Hosting these processes is very resource-intensive and expensive.
The decrease in infra costs may come from:
- Hosting many Aleph instances under the same infrastructure is more efficient than each organisation having their own.
- They might migrate the code to a more efficient language like Rust or Go
So even though Aleph Pro will require more resources, as they are all going to be hosted by OCCRP it will be cheaper overall.
I'm not sure how they want to implement the AI; I see two potential places:
- To improve the ingest process.
- To use an LLM (like ChatGPT) to query the data.
Both features are very, very, very expensive resource-wise. The only way to offer those features while lowering the infra costs is by outsourcing the AI services. If they do this, it will mean that your data will be processed by that third party, with all the awful consequences that entails.
-
They are selling existing features as new, or features that are part of other open source projects
Such as:
- Rebuilt the ingest pipeline: they recently released it in the latest versions of Aleph
- Modular design: The source is already modular (although it can always be improved)
- Enhanced data models for better linking, filtering, and insights: their model is based on followthemoney, which is open source.
-
They are ditching part of their user base
They only support self-hosted deployments for enterprise license clients. This leaves out small organisations and privacy-minded individuals. Even that solution is said to be maintained in partnership with OCCRP.
-
The new version's benefits may not be worth the costs
They say that Aleph Pro will deliver a faster, smarter, and more flexible platform, combining a modern technical foundation with user-centered design and major performance gains. But if you don't make heavy use of the service you may not need some of these improvements, although they would certainly be nice to have.
-
It could be unstable for a while
A complete platform rewrite is usually good in the long run, but these kinds of migrations tend to go through an unstable period where some of the functionality might be missing.
-
You need to make the decision blindly
Even though they may give you a beta if you request it (I'm not sure of this) before doing the switch, you need to make the decision beforehand. You may not even like the new software.
Datashare
Pros
- ICIJ, a more reliable non-profit, is behind it.
- Has the biggest and most stable development team
- Is the most active project
- Better community support
- You can host it
- It's open source
Cons
- If you have an Aleph instance there is no documented way to migrate to Datashare, and there is still no easy way to do the migration, as they don't yet use the followthemoney data schema.
- They won't host an instance for you.
Open Aleph
Pros
- You can host it
- It's open source
- The hosted solution will probably cost you way less than hosting your own solution (although they don't show prices)
- The people behind it have proven their ethical values
- I know one of their developers. She is a fantastic person who is very involved in putting technology at the service of society, and has been active at the CCC.
- They are actively reaching out to give support with the migration
Cons
- A new small organisation is behind the project
- A small development team with little recent activity. Since its creation (3 months ago) the development pace has been slow (the contributors page doesn't even load). It could be because they are still setting up the new organisation and doing the fork.
- It may not have all the features Aleph has. They started the fork in November 2024 and are 137 commits ahead and 510 behind, but those could be squashed commits.
- Their community forum doesn't have much activity
- The remote hosted solution has the same problems as Aleph Pro in terms of data and service sovereignty, although I do trust DARC more than OCCRP.
Conference organisation⚑
pretalx⚑
-
New: Import a pretalx calendar into giggity.
Search for the URL, which looks similar to https://pretalx.com/<your-event>/schedule/export/schedule.xml
-
New: Install.
NOTE: it's probably too much for a small event.
The default docker compose doesn't work as it still uses MySQL, which was dropped. If you want to use SQLite just remove the database configuration.
```yaml
---
services:
  pretalx:
    image: pretalx/standalone:v2024.3.0
    container_name: pretalx
    restart: unless-stopped
    depends_on:
      - redis
    environment:
      # Hint: Make sure you serve all requests for the `/static/` and `/media/` paths when debug is False. See [installation](https://docs.pretalx.org/administrator/installation/#step-7-ssl) for more information
      PRETALX_FILESYSTEM_MEDIA: /public/media
      PRETALX_FILESYSTEM_STATIC: /public/static
    ports:
      - "127.0.0.1:80:80"
    volumes:
      - ./conf/pretalx.cfg:/etc/pretalx/pretalx.cfg:ro
      - pretalx-data:/data
      - pretalx-public:/public
  redis:
    image: redis:latest
    container_name: pretalx-redis
    restart: unless-stopped
    volumes:
      - pretalx-redis:/data

volumes:
  pretalx-data:
  pretalx-public:
  pretalx-redis:
```
I was not able to find the default admin user so I had to create it manually. Get into the docker:
```bash
docker exec -it pretalx bash
```
When you run the commands, by default it uses another database file `/pretalx/src/data/db.sqlite3`, so I removed it and created a symbolic link to the actual place of the database, `/data/db.sqlite3`:
```bash
pretalxuser@82f886a58c57:/$ rm /pretalx/src/data/db.sqlite3
pretalxuser@82f886a58c57:/$ ln -s /data/db.sqlite3 /pretalx/src/data/db.sqlite3
```
Then you can create the admin user:
```bash
python -m pretalx createsuperuser
```
Life navigation⚑
Time navigation⚑
Org Mode⚑
-
New: Footnotes.
A footnote is started by a footnote marker in square brackets in column 0, no indentation allowed. It ends at the next footnote definition, headline, or after two consecutive empty lines. The footnote reference is simply the marker in square brackets, inside text. Markers always start with ‘fn:’. For example:
The Org website[fn:1] now looks a lot better than it used to.
...
[fn:50] The link is: https://orgmode.org
Nvim-orgmode has some basic support for footnotes.
-
New: Custom agendas.
You can use custom agenda commands.
Define custom agenda views that are available through the `org_agenda` mapping. It is possible to combine multiple agenda types into a single view. An example:
```lua
require('orgmode').setup({
  org_agenda_files = {'~/org/**/*'},
  org_agenda_custom_commands = {
    -- "c" is the shortcut that will be used in the prompt
    c = {
      description = 'Combined view', -- Description shown in the prompt for the shortcut
      types = {
        {
          type = 'tags_todo', -- Type can be agenda | tags | tags_todo
          match = '+PRIORITY="A"', -- Same as providing a "Match:" for tags view <leader>oa + m, See: https://orgmode.org/manual/Matching-tags-and-properties.html
          org_agenda_overriding_header = 'High priority todos',
          org_agenda_todo_ignore_deadlines = 'far', -- Ignore all deadlines that are too far in future (over org_deadline_warning_days). Possible values: all | near | far | past | future
        },
        {
          type = 'agenda',
          org_agenda_overriding_header = 'My daily agenda',
          org_agenda_span = 'day', -- can be any value as org_agenda_span
        },
        {
          type = 'tags',
          match = 'WORK', -- Same as providing a "Match:" for tags view <leader>oa + m, See: https://orgmode.org/manual/Matching-tags-and-properties.html
          org_agenda_overriding_header = 'My work todos',
          org_agenda_todo_ignore_scheduled = 'all', -- Ignore all headlines that are scheduled. Possible values: past | future | all
        },
        {
          type = 'agenda',
          org_agenda_overriding_header = 'Whole week overview',
          org_agenda_span = 'week', -- 'week' is default, so it's not necessary here, just an example
          org_agenda_start_on_weekday = 1, -- Start on Monday
          org_agenda_remove_tags = true, -- Do not show tags only for this view
        },
      },
    },
    p = {
      description = 'Personal agenda',
      types = {
        {
          type = 'tags_todo',
          org_agenda_overriding_header = 'My personal todos',
          org_agenda_category_filter_preset = 'todos', -- Show only headlines from `todos` category. Same value provided as when pressing `/` in the Agenda view
          org_agenda_sorting_strategy = {'todo-state-up', 'priority-down'}, -- See all options available on org_agenda_sorting_strategy
        },
        {
          type = 'agenda',
          org_agenda_overriding_header = 'Personal projects agenda',
          org_agenda_files = {'~/my-projects/**/*'}, -- Can define files outside of the default org_agenda_files
        },
        {
          type = 'tags',
          org_agenda_overriding_header = 'Personal projects notes',
          org_agenda_files = {'~/my-projects/**/*'},
          org_agenda_tag_filter_preset = 'NOTES-REFACTOR', -- Show only headlines with NOTES tag that does not have a REFACTOR tag. Same value provided as when pressing `/` in the Agenda view
        },
      },
    },
  },
})
```
You can also define the `org_agenda_sorting_strategy`. The default value is `{ agenda = {'time-up', 'priority-down', 'category-keep'}, todo = {'priority-down', 'category-keep'}, tags = {'priority-down', 'category-keep'}}`.
The available sorting strategies to apply to a given view are:
- time-up: Sort entries by time of day. Applicable only in agenda view
- time-down: Opposite of time-up
- priority-down: Sort by priority, from highest to lowest
- priority-up: Sort by priority, from lowest to highest
- tag-up: Sort by sorted tags string, ascending
- tag-down: Sort by sorted tags string, descending
- todo-state-up: Sort by todo keyword by position (example: 'TODO, PROGRESS, DONE' has a sort value of 1, 2 and 3), ascending
- todo-state-down: Sort by todo keyword, descending
- clocked-up: Show clocked in headlines first
- clocked-down: Show clocked in headlines last
- category-up: Sort by category name, ascending
- category-down: Sort by category name, descending
- category-keep: Keep default category sorting, as it appears in org-agenda-files
You can open the custom agendas with the API too. For example, to open the agenda stored under `t`:
```lua
keys = {
  {
    "gt",
    function()
      vim.notify("Opening today's agenda", vim.log.levels.INFO)
      require("orgmode.api.agenda").open_by_key("t")
    end,
    desc = "Open orgmode agenda for today",
  },
},
```
In that case I'm configuring the `keys` section of the lazyvim plugin. Through the API you can also configure these options:
- org_agenda_files
- org_agenda_sorting_strategy
- org_agenda_category_filter_preset
- org_agenda_todo_ignore_deadlines: Ignore all deadlines that are too far in future (over org_deadline_warning_days). Possible values: all | near | far | past | future
- org_agenda_todo_ignore_scheduled: Ignore all headlines that are scheduled. Possible values: past | future | all
-
New: Load different agendas with the same binding depending on the time.
I find it useful to bind `gt` to Today's agenda, but what "today" means differs between weekdays. Imagine that you want to load a work agenda from Monday to Friday before 18:00, versus a personal agenda the rest of the time.
You could then configure this function:
```lua
keys = {
  {
    "gt",
    function()
      local current_time = os.date("*t")
      local day = current_time.wday -- 1 = Sunday, 2 = Monday, etc.
      local hour = current_time.hour
      local agenda_key = "t"
      local agenda_name = "Today's" -- default

      -- Monday (2) through Friday (6)
      if day >= 2 and day <= 6 then
        if hour < 17 then
          agenda_key = "w"
          agenda_name = "Today + Work"
        end
      end

      vim.notify("Opening " .. agenda_name .. " agenda", vim.log.levels.INFO)
      require("orgmode.api.agenda").open_by_key(agenda_key)
    end,
    desc = "Open orgmode agenda for today",
  },
}
```
-
New: Better handle indentations.
There is something called virtual indents that will save you from many indentation headaches. To enable them set the `org_startup_indented = true` configuration.
If you need to adjust the indentation of your document (for example after enabling the option on existing orgmode files), visually select the lines whose indentation you want to correct (`V`) and then press `=`. You can do this with the whole file (╥﹏╥).
-
New: Remove some tags when the state has changed to DONE.
For example if you want to remove them for recurrent tasks:
```lua
local function remove_specific_tags(headline)
  local tagsToRemove = { "t", "w", "m", "q", "y" }
  local currentTags = headline:get_tags()
  local newTags = {}
  local needsUpdate = false

  -- Build the new tags list excluding the unwanted ones
  for _, tag in ipairs(currentTags) do
    local shouldKeep = true
    for _, removeTag in ipairs(tagsToRemove) do
      if tag == removeTag then
        shouldKeep = false
        needsUpdate = true
        break
      end
    end
    if shouldKeep then
      table.insert(newTags, tag)
    end
  end

  -- Only update if we actually removed something
  if needsUpdate then
    headline:set_tags(table.concat(newTags, ":"))
    headline:refresh()
  end
end

local EventManager = require("orgmode.events")

EventManager.listen(EventManager.event.TodoChanged, function(event)
  ---@cast event OrgTodoChangedEvent
  if event.headline then
    -- Only strip the tags when the new state is DONE
    local current_todo = event.headline:get_todo()
    if current_todo == "DONE" then
      remove_specific_tags(event.headline)
    end
  end
end)
```
-
New: Register the todo changes in the logbook.
You can now register the changes with events. Add this to your plugin config. If you're using lazyvim:
```lua
return {
  {
    "nvim-orgmode/orgmode",
    config = function()
      require("orgmode").setup({...})
      local EventManager = require("orgmode.events")
      local Date = require("orgmode.objects.date")
      EventManager.listen(EventManager.event.TodoChanged, function(event)
        ---@cast event OrgTodoChangedEvent
        if event.headline then
          local current_todo, _, _ = event.headline:get_todo()
          local now = Date.now()
          event.headline:add_note({
            'State "' .. current_todo .. '" from "' .. event.old_todo_state .. '" [' .. now:to_string() .. "]",
          })
        end
      end)
    end,
  },
}
```
-
New: API usage.
Get the headline under the cursor
You have information on how to do it in this pr
Custom types can trigger functionality such as opening the terminal and pinging the provided URL.
To add your own custom hyperlink type, provide a custom handler to the `hyperlinks.sources` setting. Each handler needs to have a `get_name()` method that returns a name for the handler. Additionally, optional `follow(link)` and `autocomplete(link)` methods are available to open the link and provide the autocompletion.
Refile a headline to another destination
You can do this with the API.
Assuming you are in the file where your TODOs are:
```lua
local api = require('orgmode.api')
local closest_headline = api.current():get_closest_headline()
local destination_file = api.load('~/org/journal.org')
local destination_headline = vim.tbl_filter(function(headline)
  return headline.title == 'My journal'
end, destination_file.headlines)[1]

api.refile({ source = closest_headline, destination = destination_headline })
```
-
New: Introduce the time navigation abstract identity concept.
An identity is the set of qualities, beliefs, personality traits, appearance, and/or expressions that characterize a person or a group.
Identity serves multiple functions, acting as a "self-regulatory structure" that provides meaning, direction, and a sense of self-control. It fosters internal harmony and serves as a behavioral compass, enabling individuals to orient themselves towards the future and establish long-term goals. As an active process, it profoundly influences an individual's capacity to adapt to life events and achieve a state of well-being. However, identity originates from traits or attributes that you may have little or no control over, such as your family background or ethnicity.
Identities, then, will be the guide of my life. I've tried setting essential goals and answering big questions with no success so far. This approach, however, looks more interesting because:
- I can split myself into many identities, each with its own definition, and analyse life through the different lenses, identify identity conflicts, prioritise identities...
- I can analyse each identity on its own, decide how to change my roadmap to integrate the ones I want to adopt and get away from the ones I want to leave behind.
- It reminds me of the RPG character building and although it may seem silly, that motivates me.
- It fits quite well with what I've learnt regarding habit management
The identities archive
I'm using a new notebook page called `identities.org` where I plan to analyze, build and evolve my identities.
An identity section
Each identity may have the following sections. A simple heading with the name of the identity is just fine; you'll create the sections as you need them.
Analysis of the identity
Here we can develop our thoughts on what the identity means and how you see yourself embodying it.
Identity characteristics
List the values, habits, abilities, knowledge, capabilities and experiences that define the identity, and analyse each of them.
Identity plan
Here is where we can sketch the plan we want to follow to grow or shrink this identity. It contains a list of identity axes.
Children identities
Sometimes an identity can be refined into smaller, more specific identities; here we'll add sections for each of them.
Identity analysis
Dump your thoughts on your identities
Before we get tainted by our past analysis, imagine a fresh canvas and start painting yourself.
You can add a section in `think.org` to record your findings. I found that I needed some time to do this dump, working on the section over several days before the actual analysis.
Do an initial list of values
I first created a `global values or principles` headline and listed all core principles, such as:
- All creatures are beautiful
- Be excellent to each other
- Better done than perfect
You may refactor them into the identities once you start building them.
Do an initial list of identities
Empty your mind of the different identities that define you or that you want to be defined by. Create headlines for each using the sections defined above as you need them.
Do an initial list of axes
Dream of what axes you'll want to address. If you can, order them into identities.
Do the identity analysis
Refactor the gathered thoughts into the `identities.org` file.
Select the identities you want to prioritise in the year
- Skim over all identities and, for the ones you want to focus on in the year:
  - add the `identity` tag
  - assign a priority
  - do not add a TODO keyword; we'll reserve those for the identity axes.
- Use the year custom agenda `identities` section to adjust the priority of the different identities following these guidelines:
  - Spread the identities over the different priorities so that each has more or less the same number of elements.
  - Compare an identity with the ones above or below it and decide whether to promote or demote it.
- If you don't want an identity to be in the list, add the `backlog` tag if you don't want that identity and its subidentities to appear. Add the `hide` tag in case you want any of the subidentities.
Refine the identities
Following the priority order of identities go one by one until you run out of time and:
- Read the identity analysis and the different axis.
- Order the different axes by the impact they can have on the identity
- Refine the projects of each axis
-
Correction: Use unison to sync.
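A minimal sketch of what the sync command could look like (the local path and the server are assumptions, adapt them to your layout):
```bash
# Synchronise the local notebook with the copy on another machine over ssh,
# running unattended and preferring the newest version when both sides changed.
unison ~/notebook ssh://my-server//home/user/notebook -batch -prefer newer
```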
-
New: Exporting to pdf or markdown.
To pdf
If you want to convert it with python you first need to install the dependencies:
sudo apt install texlive-xetex
Then you can do:
pandoc input.org -o output.pdf --pdf-engine=xelatex -V geometry:margin=1in -V fontsize=11pt -V colorlinks=true
To markdown
pandoc input.org -o output.md
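If you have many org files, a small loop can convert them all in one go (a sketch; adjust the glob to your notebook layout):
```bash
# Convert every org file in the current directory to markdown.
for f in *.org; do
  pandoc "$f" -o "${f%.org}.md"
done
```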
Axis⚑
-
New: Not adding a todo state when creating a new element by default.
The default state `NOTE` doesn't add any state.
-
New: Introduce the axis time navigation concept.
An identity axis is an abstract guide for action with an indefinite scope or a timeframe longer than one year. It serves as a high-level directional tool to materialise changes in your identities, helping to outline a general course without specifying exact destinations.
Limitations of an axis
- It is not suitable as a final destination on a roadmap because it is too ambiguous.
- Even minor progress could be considered sufficient, leading to a lack of clear endpoints.
- One could continue indefinitely without a sense of completion, such as continuously striving for improvement.
Axis orgmode representation
An axis is a headline or TODO headline (depending on whether it's active or not) that is part of the `plan` section of an identity.
It can have none or many of the following sections:
Axis analysis section
To gather the thoughts regarding the study of the axis
Axis plan section
To gather the axis projects that will materialise the direction of the axis.
Axis projects
To prevent endless pursuit, an axis should be broken down into projects with defined scopes that indicate when to stop advancing in that direction.
Axis projects shall:
- Have a scope shorter than 11 months so they can be managed in the life stage review.
Depending on whether the axis projects will be acted upon in the current quarter they will have two possible representations:
- If you don't plan to, it will be a project section with the analysis and steps
- If you do, then it will be a link to the project section either in `projects.org` or in `backlog.org`. You can leave the analysis section below the link in `identities.org`; that way it won't use precious space in your `projects.org` file.
Roadmap Adjustment⚑
-
New: Small annotations throughout the document.
The idea of each adjustment is that it shouldn't be cumbersome, so I've decided to set a fixed time for each one. I assume that I'll do my best in both processes and wherever I get to is just fine (remember: better done than perfect).
I believe in the power of constant small changes, so the next review will be built on top of the current one, and thus it will be done better and faster.
-
New: Do the first sketch of the year review.
As the year comes to an end it can be a good idea to review stuff that has a year cadence of change and that people are reviewing, for example:
- How the world has changed
- Relevant content stuff
- Life experiences
- Your toolset
- Your economical year review
I'm not going to review the year roadmap or how I've changed or my tactics or strategies right now as I feel it makes more sense to do it in the life review just before spring comes. In fact, we'll be able to do a better life review once we have the output of the review of the year.
Year review timeline
As you can see, the amount of stuff to review is not something that can be done in a day; my current plan is to prepare the review of the year throughout December and carry it out in the first two weeks of January.
So I've scheduled an action each 1st of December with a deadline on the 31st to:
- Create a new entry in `think.org` with the following format:
* Year review (YYYY)
** Review adjustments
** Sections
*** How the world has changed
*** ...
Then I created another action from the 1st of January to the 15th to actually do the review.
Year review phases
How the world has changed
You don't live alone in a bubble; your life is affected by what is going on around you, so if you can, it's always good to analyse it and adjust your roadmap accordingly.
Doing this in January makes a lot of sense because most newspapers and important people publish a year review that synthesizes the most important events of the year and how things have changed throughout it.
I usually gather the analysis in the Year reviews article.
How did my economy change this year
I take the chance to do a last review of the year through the lenses of my accounting system (beancount). I've made some fava dashboards that gather the most interesting information.
Review the relevant content I've consumed
With the use of mediatracker and other life logging tools I take a look at what content I've enjoyed the most. I want to share it also with all of you through these articles:
-
New: First sketch of the life review.
Life reviews are meant to give you an idea of:
- How much have you and your workflows evolved
- What roadmap decisions were right, which ones were wrong
- With the context you have now, you can think of how you could have avoided the bad decisions.
If you have the year's planning you can analyze it against your task management tools and life logs and create a review document analyzing all.
Life review timeline
As you can see, the amount of stuff to review is not something that can be done in a day; my current plan is to prepare the review from the 15th of December till the 15th of January and then carry it out until the 23rd of February, to leave space to do the spring quarter and March month reviews.
-
New: Adjust the month review process.
To record the results of the review create the section in `pages/reviews.org` with the following template:
* winter
** january review
*** work
*** personal
**** month review
***** mental dump
****** What worries you right now?
****** What drained your energy or brought you down emotionally this last month?
****** What are the little things that burden you or slow you down?
****** What do you desire right now?
****** Where is your mind these days?
****** What did you enjoy most this last month?
****** What helped you most this last month?
****** What things would you want to finish throughout the month so you can carry them to the next?
****** What things do you feel you need to do?
****** What are you most proud of this month?
***** month checks
***** analyze
***** decide
I'm assuming it's January's review and that you have two kinds of reviews, one personal and one for work.
Dump your mind
The first thing we want to do in the review is to dump all that's in our mind into our system to free up mental load.
Try not to, but if you think of decisions you want to make that address the elements you're discovering, write them down in the `Decide` section of your review document.
There are different paths to discover actionable items:
- Analyze what is in your mind: Take 10 minutes to answer to the questions of the template under the "mental dump" section (you don't need to answer them all). Notice that we do not need to review our life logging tools (diary, action manager, ...) to answer these questions. This means that we're doing an analysis of what is in our minds right now, not throughout the month. It's flawed but as we do this analysis often, it's probably fine. We add more importance to the latest events in our life anyway.
Clean your notebook
- Empty the elements you added to the
review box. I have them in my inbox with the tag:review:(you have it in the month agenda viewgM) - Clean your life notebook by:
- Iterate over the areas of
proyects.orgonly checking the first level of projects, don't go deeper and for each element:- Move the done elements either to
archive.orgorlogbook.org. - Move to
backlog.orgthe elements that don't make sense to be active anymore
- Move the done elements either to
- Check if you have any
DONEelement incalendar.org. - Empty the
inbox.org - Empty the
DONEelements oftalk.org -
Clean the elements that don't make sense anymore from
think.org -
Process your
month checks. For each of them: -
If you need, add action elements in the
mental dumpsection of the review. - Think of whether you've met the check.
Refresh your idea of how the month went
- Open your
bitácora.orgagenda view to see what has been completed in the last monthmatch = 'CLOSED>"<-30d>"-work-steps-done',ordered by nameorg_agenda_sorting_strategy = { "category-keep" },and change the priority of the elements according to the impact. Open yourrecurrent.orgagenda view to see what has been done the last monthmatch = 'LAST_REPEAT>"<-30d>"-work' - Check what has been left of your month objectives
+mand refile the elements that don't make sense anymore. - Check the reports of your weekly reviews of the month in the
reviews.orgdocument.
Check your upcoming commitments
Check all your action management tools (in my case `orgmode` and `ikhal`) to identify:
- Arranged commitments
- trips
-
Create next stage's life notebook
After reading "The Bulletproof Journal", I was drawn to the idea of changing notebooks each year, carrying over only the necessary things.
I find this to be a powerful concept since you start each stage with a clean canvas. This brings you closer to desire versus duty as it removes the commitments you made to yourself, freeing up significant mental load. From this point, it's much easier to allow yourself to dream about what you want to do in this new stage.
I want to apply this concept to my digital life notebook as I see the following advantages:
- It lightens my files making them easier to manage and faster to process with orgmode
- It's a very easy way to clean up
- It's an elegant way to preserve what you've recorded without it becoming a hindrance
- In each stage, you can start with a different notebook structure, meaning new axes, tools, and structures. This helps avoid falling into the rigidity of a constrained system or artifacts defined by inertia rather than conscious decision
- It allows you to avoid maintaining files that follow an old scheme or having to migrate them to the new system
- Additionally, you get rid of all those actions you've been reluctant to delete in one fell swoop
The notebook change can be done in two phases:
- Notebook Construction
- Stage Closure
Notebook Construction
This phase spans from when you start making stage adjustments until you finally close the current stage. You can follow these steps:
- Create a directory with the name of the new stage. In my case, it's the number of my predominant age during the stage
- Create a directory for the current stage's notebook within "notebooks" in your references. Here we'll move everything that doesn't make sense to maintain. It's important that this directory isn't within your agenda files
- Quickly review the improvements you've noted that you want to implement in next year's notebook to keep them in mind. You can note the references in the "Create new notebook" action
As you review the stage, decide if it makes sense for the file you're viewing to exist as-is in the new notebook. Remember that the idea is to migrate minimal structure and data.
- If it makes sense:
- Create a symbolic link in the new notebook. When closing the stage, we'll replace the link with the file's final state
- If the file no longer makes sense, move it to
references/notebooks
-
New: Life stage roadmap adjustment adjustments.
Review what you've done the last year
Read your `logbook.org` (or `bitácora.org`), adjusting the priorities of the areas, projects and actions, thinking of the impact each element has had on your life.
Review what you've learned in the last year
It's always interesting to look back and see what you've learned throughout the year. I have these sources of data:
If you happen to have a digital garden you can look at your git history to know what has changed since last year. That's cumbersome and ugly though; it's better to review your newsletters, although you may need to use something like `mkdocs-newsletter`.
While you skim through the newsletters you can add the highlights of what you've learned to the analysis report.
You can also check your repository insights.
I use `anki` to record the knowledge that I need to have in my mind. The program has a "Stats" tab where you can see your insights from the last years to understand how you are learning. You can also go to the "Browse" tab to sort the cards by creation date and get an idea of which decks have been the most used.
Update your identities
Follow the steps of identity management.
Year reviews⚑
-
New: First version of the 2024 review.
Sources
- Boletín de socias de El Salto 2024
- Fotografías de un año en el que no se pudo parar el genocidio
- Boletín de diciembre de El Salto 2024
- Siete palabras para entender feminismos 2024
Fascism
In 2024, at the state level the far right gained ground and shares of power, but the massive mobilisations for housing, against the tourism model and over the handling of the DANA openly challenged its narrative of the world and its apportioning of blame.
At the global level things are going even worse:
- The United States, the world's main economic and military power, has fallen back into the hands of Trump, a sadistic multimillionaire who gained all his power through his gigantic media power and an army of trolls, hoaxes and supremacist paramilitaries dressed in balaclavas and carrying assault rifles. The main countries of the world have already fallen, or are about to fall, into the nets of this disinformation international, backed and promoted in turn by four of the five richest men in the world.
One of them is Elon Musk, a despicable being who, after wrecking Twitter, is campaigning for far-right parties such as Germany's AfD. - In Argentina, Javier Milei has been governing since 10 December 2023, and he is basically dismantling the rule of law and all the advances in feminist policies that were achieved with so much effort.
Antifascism
The resistance to the ideas pushed by this worldwide neofascist coalition, which blames migration and the poor for every ill and denies climate change, LGTBIQ+ rights and women's rights, runs along other paths.
At El Salto we are convinced that the outcome of this contest will largely be decided on the field of information, of the media and social networks. That is why we believe it is so important, now more than ever, to have our own media, independent of big economic powers or partisan interests.

Anti-racist demonstration in the streets of London.
Housing
The gigantic mobilisations for housing and against the predatory tourism model that swept through 2024, from the Canary Islands to the Balearics, Madrid, Barcelona and almost every major Spanish city, are a good demonstration that the end of this story is still open. After these shows of strength and meaning, housing is no cheaper, but fewer people think the underlying problem is the "inquiokupas".

The Canary Islands stood up to mass tourism in April, in a historic mobilisation held simultaneously across the eight islands.

Demonstration against real-estate speculation and for the right to housing in October.

Evictions continued throughout 2024, under the legal cover given to the speculators who traffic with the right to decent housing, against what the Spanish Constitution establishes.
Dawn in the Lavapiés neighbourhood, while residents wait for yet another eviction on their street; it was finally carried out, leaving another family on the street.
Climate change
The DANA
A similar fight over the narrative and the apportioning of blame took place after the DANA in València, where the far right's attempts to steer the debate towards the looting supposedly carried out by migrants were thwarted by the moving popular response to get through aid that no institution, local or state, was providing.
A senyera of the País Valencià in a window in Paiporta, after the DANA.
The body of a dog floats alongside the reeds and trees dragged by the DANA onto Pinedo beach. The loss of biodiversity, both victim of and solution to this crisis, also lies behind what is happening. Restoring nature is key to protecting ourselves from extreme events, and at the same time a transformative opportunity to build resilient, healthy environments for biodiversity and for people. This requires measures from the local level to the global one.

Nevertheless, the Valencian population (and that of the rest of the state) made its discontent with the PP regional government's mismanagement clear, as could be seen in the massive demonstration of 30 November demanding the resignation of president Carlos Mazón.
Feminism
2024 has also been marked by feminist struggles: the media and social earthquake triggered by anonymous reports of sexual violence on social networks has upended the political landscape, shaken the worlds of cinema and theatre and, above all, cracked the armour of impunity that covers sexist assaults. Wherever all this ends up, the feminist movement has once again shown its enormous power for change, in the opposite direction to the far right's agenda.

25 November, the International Day for the Elimination of Violence against Women, was marked amid an intense conversation about sexual violence.
The Pelicot case

On 19 December 2024 we learned the verdict in the Pelicot case. Dominique was sentenced to 20 years, the maximum penalty. The court in Avignon (France) found him guilty of aggravated rape against his now ex-wife Gisèle Pelicot. From July 2011 until October 2020, the man used dating websites to invite strangers to sexually assault his then wife, whom he had previously given enough anxiolytics to bring her "close to a coma". The other almost fifty defendants were found guilty of rape, with sentences of between three and 15 years.
Gisèle also referred to shame when she explained why everyone knows her by the surname of her ex-husband and abuser: "I have grandchildren and I don't want them to be ashamed of using that surname. I want them to be proud of their grandmother. From today on, Mrs Pelicot will be remembered, and Mr Pelicot less and less," explained this woman who, although she recovered her own surname after the divorce, does not want to hand her grandchildren's name over to a rapist.
The slogan "shame must change sides" has been repeated beyond France and had a prominent place in the demonstrations of 25 November, the Day for the Elimination of Violence against Women.
The Errejón case
On 23 October, the testimony of a woman, published on journalist Cristina Fallarás's Instagram account, spread through social networks. The woman pointed to "a politician from Madrid" as a "psychological abuser". Within a few hours, social networks identified Íñigo Errejón. On Thursday 25 October, Errejón announced that he was leaving politics, and that same afternoon the actress Elisa Mouliaá filed a complaint with the police describing a sexual assault.
Migration
The end of the year also leaves us another figure, no less devastating for being expected: the estimate of more than ten thousand people dead trying to reach our country. People who, however much a far right on the rise in the media insists otherwise, were only trying to improve their lives somewhat, and who lost them to a cruel system that calls this other genocide a "migration crisis".

Spain's borders set a shameful record in 2024: 10,457 people died trying to reach the coasts of the peninsula. Image of the anti-racist demonstration on 9 November, Madrid.
Social movements
This year has been hard for squats in the city of Madrid. The CSO La Ferroviaria and the CSO La Atalaya were evicted.

Members of the CSO La Atalaya carry out materials that were part of the dozens of activities the social centre offered, while the area is cordoned off by Policía Nacional officers.
Public services
Student assemblies call for keeping up the demand for budgets that allow the public university to survive, in an autonomous region where private education is given more of a push.
International
Palestine
In December, Israel focused its attacks on the hospitals of northern Gaza, an army pressing on with a genocide that has now lasted more than 440 days.
We began the year with our eyes on Gaza and the West Bank, watching over the lives of hundreds of thousands of Palestinians threatened across the world by the Zionist state of Israel. Things have not improved over these 12 months, and this calculated genocide has been accompanied by other misfortunes, such as the growing environmental crisis with ever more frequent effects across the planet, including our little Spain. In Valencia, 231 people lost their lives and 4 more remain missing because of a DANA that devastated the region in a few hours, deaths that might have been avoided if those in political charge had acted with some of what is expected of them, a vocation for public service, instead of hiding their failings and trying by every judicial means to evade responsibility.
After Trump's election the outlook for Palestine has only worsened (although Harris didn't look like she was going to commit herself in the slightest either).
Nevertheless, society is responding as best it can to pressure its governments into stopping the genocide.

In May, hundreds of students occupied the Humboldt University of Berlin in support of Gaza.

The 2024 "naval battle" of Vallecas, with a special remembrance for Palestine.
Syria

Arab members of the Syrian Democratic Forces inside the Raqqa stadium, which the Islamic State had turned into a prison.
Honduras

Honduras inherited from Juan Orlando Hernández's narco-government the violence exercised in a structural way by the police, the army, the maras and the hitmen hired by companies to kill environmental leaders.
Corruption
Republic
This year the monarchy has fallen somewhat in the popular imagination, although not enough.

Moments of tension during the visit of the king and queen and political representatives 5 days after the DANA hit Valencia. It caused an uproar because volunteer aid was held up so that the royal entourage could pass. Unsurprisingly, that did not go down well.
-
New: Bits of 2025.
Fascism
At Trump's inauguration ceremony, Elon Musk made the Nazi salute.
Feminism
Life chores management⚑
aerc⚑
-
New: Introduce aerc email command line client.
aerc is an email client that runs in your terminal.
Some of its more interesting features include:
- Editing emails in an embedded terminal tmux-style, allowing you to check on incoming emails and reference other threads while you compose your replies
- Render HTML emails with an interactive terminal web browser, highlight patches with diffs, and browse with an embedded less session
- Vim-style keybindings and ex-command system, allowing for powerful automation at a single keystroke
- First-class support for working with git & email
- Open a new tab with a terminal emulator and a shell running for easy access to nearby git repos for parallel work
- Support for multiple accounts, with IMAP, Maildir, Notmuch, Mbox and JMAP backends. Along with IMAP, JMAP, SMTP, and sendmail transfer protocols.
- Asynchronous IMAP and JMAP support ensures the UI never gets locked up by a flaky network.
- Efficient network usage - aerc only downloads the information which is necessary to present the UI, making for a snappy and bandwidth-efficient experience
- Email threading (with and/or without IMAP server support).
- PGP signing, encryption and verification using GNUpg.
- 100% free and open source software!
Source
Download the latest version and compile it with the repo instructions.
Debian
The Debian version (`sudo apt-get install aerc`) is very old, so it's better to compile it directly.
Documentation
The docs are scarce and hard to read online, but they are thorough when browsed locally.
If you're lost you can always run the tutorial again with `:help tutorial`.
Configuration
On its first run, aerc will copy the default config files to `~/.config/aerc` on Linux. When you start the program for the first time a wizard will configure an account and start up the tutorial.
Read Bence's post, it's a nice guideline.
Notmuch can be used directly as a backend for several email clients, including alot, dodo, Emacs, vim and (more importantly for us) aerc. While it can be used on its own, we are going to use it for its search index, and ability to seamlessly operate over multiple accounts' maildir folder. This will provide us with the ability to search all of our email regardless of account, and to show a unified overview of certain folders, e.g. a unified inbox. If you are only setting this up for a single account, I still recommend using notmuch for its search capabilities.
If these guidelines don't work, try these other ones.
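For reference, a minimal sketch of what an IMAP account in `accounts.conf` can look like; the account name, addresses and folder names below are placeholders and the exact keys are worth checking against `man aerc-accounts`:

```ini
[Personal]
# Usernames containing @ are URL-encoded (%40) in the source/outgoing URLs
source   = imaps://user%40example.org@imap.example.org
outgoing = smtps://user%40example.org@smtp.example.org
default  = INBOX
from     = Jane Doe <user@example.org>
copy-to  = Sent
```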
Monitorization
If you are using the mailsync scripts proposed above or on Bence's post you can check if the service failed with:
```yaml
groups:
  - name: email
    rules:
      - alert: MailsyncError
        expr: |
          count_over_time({user_service_name=~"mailsync-.*"} |= `Failed` [15m]) > 0
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "Error syncing the email with service {{ $labels.user_service_name }} at {{ $labels.host}}"
```
It assumes that you have the `user_service_name` label defined in your logs. I create them with vector with the next config:
```yaml
transforms:
  journald_labels:
    type: remap
    inputs:
      - journald_filter
    source: |
      .service_name = ._SYSTEMD_UNIT || "unknown"
      .user_service_name = .USER_UNIT || ._SYSTEMD_USER_UNIT || "unknown"
```
Usage
aerc has many commands that can be bound to keybindings; to see them all check `man 1 aerc`.
Main page
- Cycles to the previous or next tab
- k, j: Scrolls up and down between messages
- Scrolls half a page up or down
- g, G: Selects the first or last message, respectively
- K, J: Switches between folders in the sidebar
: Opens the selected message
You can also search the selected folder with /, or filter with . When searching you can use n and p to jump to the next and previous result. Filtering hides any non-matching message.
Message viewer
Press
to open a message. By default, the message viewer will display your message using less(1). This should also have familiar, vim-like keybindings for scrolling around in your message. Multipart messages (messages with attachments, or messages with several alternative formats) show a part selector on the bottom of the message viewer.
- Cycle between parts of a multipart message
- q: Close the message viewer
- f: next message
- b: previous message
To show HTML messages, uncomment the text/html filter in your aerc.conf file (which is probably in ~/.config/aerc/) and install its dependencies: w3m and dante-utils.
You can also do many tasks you could do in the message list from here, like replying to emails, deleting the email, or view the next and previous message (J and K).
Some interesting commands are:
:unsubscribe: Attempt to automatically unsubscribe the user from the mailing list through use of the List-Unsubscribe header. If supported, aerc may open a compose window pre-filled with the unsubscribe information or open the unsubscribe URL in a web browser.
Composing messages
- C: Compose a new message
- rr: Reply-all to a message
- rq: Reply-all to a message, and pre-fill the editor with a quoted version of the message being replied to
- Rr: Reply to a message
- Rq: Reply to a message, and pre-fill the editor with a quoted version of the message being replied to
The message composer will appear. You should see To, From, and Subject lines, as well as your $EDITOR. You can cycle between these fields (tab won't cycle between fields once you enter the editor).
References
Interesting configurations
himalaya⚑
-
Correction: Writing an importer.
NOTE: since 3.0.0 the importers need to be written with `beangulp`. I've tried using it but found it confusing so I fell back to 2.x.
Check a list of already existing importers here
Once you have your importer built up you might want to spice it up with smart_importer
-
New: Configure GPG.
Himalaya relies on cargo features to enable gpg. You can see the default enabled features in the Cargo.toml file. As of 2025-01-27 the `pgp-commands` feature is enabled.
You only need to add the next section to your config: `pgp.type = "commands"`
And then you can use both the cli and the vim plugin with gpg. Super easy.
dawarich⚑
-
New: Add interesting article to merge all protocols under matrix.
-
New: Introduce dawarich.
Dawarich is a self-hostable alternative to Google Location History (Google Maps Timeline)
Tweak the official docker-compose keeping in mind:
- To configure the `APPLICATION_HOST` if you're using a reverse proxy (see the sketch below)
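A minimal sketch of that override; the service name `dawarich_app` and the domain are assumptions, adjust them to the official compose file:

```yaml
services:
  dawarich_app:
    environment:
      APPLICATION_HOST: "dawarich.example.org"
```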
Then run `docker compose up`. You can now visit your Dawarich instance at http://localhost:3000 or at port 3000 of your host. The default credentials are `demo@dawarich.app` and `password`.
Go to your account and change the default account and password.
Be careful not to upgrade with watchtower, the devs say that it's not safe yet to do so.
Not there yet
- Immich photos are not well shown: This happens when opening the map, or selecting one of the buttons "Yesterday", "Last 7 Days" or "Last month". If you select the same date range through the date-pickers, the photos are shown.
- Support import of OSMand+ favourites gpx
- OpenID/Oauth support
References
-
New: How to see the coordinates of a point.
You need to enable the "Points" layer, which is on the layers menu at the top right of the map.
Then if you click on one point you can get the coordinates
Signal⚑
-
New: Signal bots.
To write signal bots you can use this library
-
New: How to set a master password.
You can't, it's not supported and it doesn't look like it will be (1, 2).
-
New: Add note on AWS use.
It runs on AWS. In October 2025 there was an AWS outage and Signal went down with it.
Content Management⚑
Jellyfin⚑
-
New: Add note on apollo.
Also check out Apollo, a Sunshine fork.
-
New: How to rollback.
Copy the `/srv/jellyfin/app` directory to `bk.app` and then restore the directory from backup.
If the permissions have gone amiss you'll need to fix them before you start the server or you'll get an `attempt to write a readonly database` error.
Resize an image
To resize an image, you can use the convert command from ImageMagick. For example, to resize an image named input.jpg to have a width of 800 pixels while maintaining the aspect ratio, use the following command:
```bash
convert input.jpg -resize 800x input_resized.jpg
```
You can also specify both width and height if you want to resize the image to specific dimensions:
```bash
convert input.jpg -resize 800x600 input_resized_exact.jpg
```
Book Management⚑
-
New: Convert images based pdf to epub.
NOTE: before proceeding, inspect the following AI-based tools, as they will probably give a better output:
If the pdf is based on images
Then you need to use OCR to extract the text.
First, convert the PDF to images:
```bash
pdftoppm -png input.pdf page
```
Apply OCR to your PDF
Use `tesseract` to extract text from each image:
```bash
for img in page-*.png; do
  tesseract "$img" "${img%.png}" -l eng
done
```
This produces `page-1.txt`, `page-2.txt`, etc.
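A possible final step (not in the original notes) is to stitch the OCR output into an EPUB, for example with pandoc; the title metadata is a placeholder:
```bash
# Concatenate the OCRed pages in order and build an EPUB
cat page-*.txt > book.txt
pandoc book.txt -o book.epub --metadata title="My Book"
```
-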
New: Protect ombi behind authentik.
This option allows the user to select a HTTP header value that contains the desired login username.
Note that if the header value is present and matches an existing user, default authentication is bypassed - use with caution.
This is most commonly utilized when Ombi is behind a reverse proxy which handles authentication. For example, if using Authentik, the X-authentik-username HTTP header which contains the logged in user's username is set by Authentik's proxy outpost.
Book DRM⚑
-
New: How to remove DRM from ebooks.
To remove the DRM from ebooks you can use DeDRM_tools.
Installation
You need to download the latest release and follow the instructions of the README.md inside the zip file.
Usage
Once the plugin is installed, you can import the books you want to remove the DRM from. In theory the plugin only works on import and not on convert, but I've also used the convert tool to make sure it's a different file.
Beets⚑
-
New: Add forensic architecture.
- Forensic architecture: Forensic Architecture (FA) is a research agency based at Goldsmiths, University of London. Its mandate is to develop, employ, and disseminate new techniques, methods, and concepts for investigating state and corporate violence. Its team includes architects, software developers, filmmakers, investigative journalists, scientists, and lawyers.
-
New: Add plugins.
Programmatically interact with beets
There is the python API reference, but it doesn't show, for example, how to import the library (see the sketch below).
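A minimal sketch of what that could look like, assuming the default library database path; the path and the query string are placeholders and the exact API is worth double-checking against the beets reference:

```python
from beets.library import Library

# Open the beets library database (path is an assumption, adjust to your config)
lib = Library("/home/user/.config/beets/library.db")

# Iterate over items matching a beets query string (placeholder query)
for item in lib.items("artist:Radiohead"):
    print(item.artist, "-", item.title)
```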
Knowledge Management⚑
Anki⚑
-
New: File location.
On Linux:
- Recent Anki versions store your user data in `~/.local/share/Anki2`, or `$XDG_DATA_HOME/Anki2` if you have set a custom data path.
- Anki's launcher is installed in `/usr/local/share/anki`.
- When you install/update Anki with the launcher, it downloads support files and places them in `~/.local/share/AnkiProgramFiles`.
Removing that folder will cause the launcher to behave like a fresh install.
The AnkiProgramFiles folder contains all the files needed to run Anki aside from the launcher.
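To keep a quick backup of the user data before touching these folders, something along these lines should be enough (the destination path is a placeholder):
```bash
# Archive the Anki user data directory
tar czf ~/backups/anki-$(date +%F).tar.gz -C ~/.local/share Anki2
```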
Torrent management⚑
qBittorrent⚑
Health⚑
Silence⚑
-
New: Introduce the Right to quiet collective.
- Right to quiet: The Right to Quiet Society for Soundscape Awareness and Protection was founded in Vancouver, British Columbia in 1982 as a charitable organization with the mission of raising public awareness of the detrimental effects of noise on health; promoting awareness of noise pollution and the dangers of noise to our physical, emotional, and spiritual wellbeing; working for noise reduction through better regulation and enforcement; encouraging responsible behaviour regarding noise; advocating for manufacturing quieter products; and fostering recognition of the right to quiet as a basic human right, rather than as an amenity for the affluent.
- Right to quiet resources
Technology⚑
Coding⚑
Bash snippets⚑
-
New: Add context switches column meanings.
- UID: The real user identification number of the task being monitored.
- USER: The name of the real user owning the task being monitored.
- PID: The identification number of the task being monitored.
- cswch/s: Total number of voluntary context switches the task made per second. A voluntary context switch occurs when a task blocks because it requires a resource that is unavailable.
- nvcswch/s: Total number of non-voluntary context switches the task made per second. An involuntary context switch takes place when a task executes for the duration of its time slice and is then forced to relinquish the processor.
- Command: The command name of the task.
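These columns are what `pidstat` reports; for instance, to report context switching activity per task once per second for five seconds:
```bash
# -w reports task switching activity (the cswch/s and nvcswch/s columns)
pidstat -w 1 5
```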
PDM⚑
-
New: Load a plugin on startup.
You can define the
`lazy = false` in your plugin spec:
```lua
return {
  -- the colorscheme should be available when starting Neovim
  {
    "folke/tokyonight.nvim",
    lazy = false, -- make sure we load this during startup if it is your main colorscheme
  },
}
```
-
Correction: Suggest to check uv.
Maybe use uv instead (although so far I'm still using
`pdm`).
Vim Snippets⚑
-
New: Search for different strings in the same search query.
`* DONE\|* REJECTED\|* DUPLICATED`
-
New: Upgrading python version of all your pipx packages.
If you upgrade the main python version and remove the old one (a dist upgrade) then you won't be able to use the installed packages.
If you're lucky enough to have the old one you can use:
```bash
pipx reinstall-all --python <the Python executable file>
```
Otherwise you need to export all the packages with
```bash
pipx list --json > ~/pipx.json
```
Then reinstall one by one:
```bash
set -ux
if [[ -e ~/pipx.json ]]; then
  for p in $(cat ~/pipx.json | jq -r '.venvs[].metadata.main_package.package_or_url'); do
    pipx install $p
  done
fi
```
The problem is that this method does not respect the version constraints nor the injects, so you may need to debug each package a bit.
-
New: Push to a forgejo docker registry.
Login to the container registry
To push an image or if the image is in a private registry, you have to authenticate:
```bash
docker login forgejo.example.com
```
If you are using 2FA or OAuth use a personal access token instead of the password.
Image naming convention
Images must follow this naming convention:
`{registry}/{owner}/{image}`
When building your docker image, using the naming convention above, this looks like:
```bash
docker build -t {registry}/{owner}/{image}:{tag} .
docker tag {some-existing-image}:{tag} {registry}/{owner}/{image}:{tag}
```
Where your registry is the domain of your forgejo instance (e.g. forgejo.example.com). For example, these are all valid image names for the owner testuser:
- forgejo.example.com/testuser/myimage
- forgejo.example.com/testuser/my-image
- forgejo.example.com/testuser/my/image
NOTE: The registry only supports case-insensitive tag names. So image:tag and image:Tag get treated as the same image and tag.
Push an image
Push an image by executing the following command:
```bash
docker push forgejo.example.com/{owner}/{image}:{tag}
```
For example:
```bash
docker push forgejo.example.com/testuser/myimage:latest
```
-
New: Installing all the binaries the application installs.
If the package installs more than one binary (for example
`ansible`), you need to use the `--install-deps` flag:
```bash
pipx install --install-deps ansible
```
-
New: Create quickmarks across files.
Use capital letters for the mark.
http://vim.wikia.com/wiki/Using_marks
Marks can span across files. To use such marks one has to use upper-case registers i.e. A-Z. Lower-case registers are used only within files and do not span files. That's to say, if you were to set a mark in a file foo.c in register "a" and then move to another file and hit 'a, the cursor will not jump back to the previous location. If you want a mark which will take you to a different file then you will need to use an upper-case register. For example, use mA instead of ma.
Configure Docker to host the application⚑
-
New: Syntax rules.
Boolean operations
Not equal operation
In Lua, you can perform a "not equal" comparison using the `~=` operator:
```lua
a ~= b -- true if a is not equal to b
```
List operations
Length of a list
```lua
local current_slide = {}
if #current_slide > 0 then
  -- code
end
```
-
New: Configure gitea.
Check the configuration sheet or the default values
-
New: Upgrade the gitea actions runner.
- Check in the releases the last version and the changelog
- Deploy the new version
- Restart the service
-
New: Upgrade gitea.
Check the Changelog for breaking changes
To make Gitea better, some breaking changes are unavoidable, especially for big milestone releases. Before upgrading, please read the Changelog on Gitea blog and check whether the breaking changes affect your Gitea instance.
Verify there are no deprecated configuration options
New versions of Gitea often come with changed configuration syntax or options, which are usually displayed for at least one release cycle at the top of the Site Administration panel. If these warnings are not resolved, Gitea may refuse to start in the following version.
Make a backup
- docker pull the latest Gitea release.
- Stop the running instance, backup data.
- Use docker or docker-compose to start the newer Gitea Docker container (a rough sketch of these steps follows).
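A rough sketch of those steps for a docker-compose deployment, assuming the compose file lives in `/srv/gitea` and the data sits in `/srv/gitea/data` (both paths are placeholders):
```bash
cd /srv/gitea
# Pull the latest Gitea image referenced in the compose file
docker compose pull
# Stop the running instance and back up the data directory
docker compose down
tar czf gitea-data-$(date +%F).tar.gz data/
# Start the newer container
docker compose up -d
```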
A script automating the following steps for a deployment on Linux can be found at contrib/upgrade.sh in Gitea's source tree.
- Download the latest Gitea binary to a temporary directory.
- Stop the running instance, backup data.
- Replace the installed Gitea binary with the downloaded one.
- Start the Gitea instance.
Read the script to see what it's going to do. To upgrade to
`1.20.5` you can use:
```bash
./update.sh -v 1.20.5
```
If you have a different home directory for gitea you can set:
```bash
giteahome=/var/gitea ./update.sh -v 1.20.5
```
-
New: Limit the access of a docker on a server to the access on the docker of another server.
WARNING: I had issues with this path and I ended up not using docker swarm networks.
If you want to restrict access to a docker container (running on server 1) so that only another specific docker container running on another server (server 2) can access it, you need more than just IP-based filtering between hosts. The solution is then to:
-
Create a Docker network that spans both hosts using Docker Swarm or a custom overlay network.
-
Use Docker's built-in DNS resolution to allow specific container-to-container communication.
Here's a step-by-step approach:
1. Set up Docker Swarm (if not already done)
On server 1:
```bash
docker swarm init --advertise-addr <ip of server 1>
```
This will output a command to join the swarm. Run that command on server 2.
2. Create an overlay network
```bash
docker network create --driver overlay --attachable <name of the network>
```
3. Update the docker compose on server 1
Imagine for example that we want to deploy wg-easy.
```yaml
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy:latest
    container_name: wg-easy
    networks:
      - wg
      - <name of the network> # Add the overlay network
    volumes:
      - wireguard:/etc/wireguard
      - /lib/modules:/lib/modules:ro
    ports:
      - "51820:51820/udp"
      # - "127.0.0.1:51821:51821/tcp" # Don't expose the http interface, it will be accessed from within the docker network
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=1

networks:
  wg:
    # Your existing network config
  <name of the network>:
    external: true # Reference the overlay network created above
```
4. On server 2, create a Docker Compose file for your client container
```yaml
services:
  wg-client:
    image: your-client-image
    container_name: wg-client
    networks:
      - <name of the network>
    # Other configuration for your client container

networks:
  <name of the network>:
    external: true # Reference the same overlay network
```
5. Access the WireGuard interface from the client container
Now, from within the client container on server 2, you can access the WireGuard interface using the container name:
`http://wg-easy:51821`
This approach ensures that:
- The WireGuard web interface is not exposed to the public (not even localhost on server 1)
- Only containers on the shared overlay network can access it
- The specific container on server 2 can access it using Docker's internal DNS
Testing the network is well set
You may be confused if the new network is not shown on server 2 when running
`docker network ls`, but that's normal: server 2 is a swarm worker node. Not seeing the overlay network on server 2 is actually expected behavior; worker nodes cannot list or manage networks directly. However, even though you can't see them, containers on server 2 can still connect to the overlay network when properly configured.
To check that the swarm is well set up you can run `docker node ls` on server 1 (you'll see an error on server 2 as it's a worker node).
Weird network issues with swarm overlays
I've seen cases where after a server reboot you need to remove the overlay network from the docker compose and then add it again.
After many hours of debugging I came up with the patch of removing the overlay network from the docker-compose and attaching it within the systemd service:
```ini
[Unit]
Description=wg-easy
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/data/apps/wg-easy
TimeoutStartSec=100
RestartSec=2s
ExecStart=/usr/bin/docker compose -f docker-compose.yaml up
ExecStartPost=/bin/bash -c '\
  sleep 30; \
  /usr/bin/docker network connect wg-easy wg-easy; \
'
ExecStop=/usr/bin/docker compose -f docker-compose.yaml down

[Install]
WantedBy=multi-user.target
```
-
-
New: Configure triggers not to push to a branch.
There is now a branches-ignore option:
```yaml
on:
  push:
    branches-ignore:
      - main
```
New: Not there yet.
- Being able to run two jobs on the same branch: It will be implemented with concurrency with this pr. This behavior didn't happen before 2023-07-25
-
New: Push an image with different architectures after building it in different instances.
To push both an ARM and AMD Docker image to Docker Hub, from two separate machines (e.g., an ARM-based and an AMD-based instance), follow these steps:
Tag the image correctly on each architecture
On each instance, build your image as normal, but tag it with a platform-specific suffix, like
`myuser/myimage:arm64` or `myuser/myimage:amd64`.
On the ARM machine:
```bash
docker build -t myuser/myimage:arm64 .
docker push myuser/myimage:arm64
```
On the AMD machine:
```bash
docker build -t myuser/myimage:amd64 .
docker push myuser/myimage:amd64
```
Create a multi-architecture manifest (on one machine)
If you want users to pull the image without worrying about the platform (e.g., just `docker pull myuser/myimage:latest`), you can create and push a manifest list that combines the two.
Choose either machine to run this (after both images are pushed):
```bash
docker manifest create myuser/myimage:latest \
  --amend myuser/myimage:amd64 \
  --amend myuser/myimage:arm64
docker manifest push myuser/myimage:latest
```
-
Correction: Add note to run gitea actions runners in kubernetes.
For the Kubernetes act runner there is a stale issue and an ugly implementation that runs docker-in-docker inside the kubernetes node.
-
Correction: Push an image with different architectures after building it in different instances.
To push both an ARM and AMD Docker image to a Docker registry, from two separate machines (e.g., an ARM-based and an AMD-based instance), you have two options:
- Run two different pipelines and then build a manifest
- Use two buildx remotes
QEMU was discarded because it took too long to build the images.
-
New: Do a copy of a list of docker images in your private registry.
set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" IMAGE_LIST_FILE="${1:-${SCRIPT_DIR}/bitnami-images.txt}" TARGET_REGISTRY="${2:-}" if [[ -z "$TARGET_REGISTRY" ]]; then echo "Usage: $0 <image_list_file> <target_registry>" echo "Example: $0 bitnami-images.txt your.docker.registry.org" exit 1 fi if [[ ! -f "$IMAGE_LIST_FILE" ]]; then echo "Error: Image list file '$IMAGE_LIST_FILE' not found" exit 1 fi log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" } extract_image_name_and_tag() { local full_image="$1" # Remove registry prefix to get org/repo:tag # Examples: # docker.io/bitnami/discourse:3.4.7 -> bitnami/discourse:3.4.7 # registry-1.docker.io/bitnami/os-shell:11-debian-11-r95 -> bitnami/os-shell:11-debian-11-r95 if [[ "$full_image" =~ ^[^/]*\.[^/]+/ ]]; then # Contains registry with dot - remove everything up to first / echo "${full_image#*/}" else # No registry prefix echo "$full_image" fi } pull_and_push_multiarch() { local source_image="$1" local target_registry="$2" local image_name_with_tag image_name_with_tag=$(extract_image_name_and_tag "$source_image") local target_image="${target_registry}/${image_name_with_tag}" log "Processing: $source_image -> $target_image" local pushed_images=() local architectures=("linux/amd64" "linux/arm64") local arch_suffixes=("amd64" "arm64") # Try to pull and push each architecture for i in "${!architectures[@]}"; do local platform="${architectures[$i]}" local arch_suffix="${arch_suffixes[$i]}" log "Attempting to pull ${platform} image: $source_image" if sudo docker pull --platform "$platform" "$source_image" 2>/dev/null; then log "Successfully pulled ${platform} image" # Tag with architecture-specific tag for manifest creation local arch_specific_tag="${target_image}-${arch_suffix}" sudo docker tag "$source_image" "$arch_specific_tag" log "Pushing ${platform} image as ${arch_specific_tag}" if sudo docker push "$arch_specific_tag"; then log "Successfully pushed ${platform} image" pushed_images+=("$arch_specific_tag") else log "Failed to push ${platform} image" sudo docker rmi "$arch_specific_tag" 2>/dev/null || true fi else log "⚠️ ${platform} image not available for $source_image - skipping" fi done if [[ ${#pushed_images[@]} -eq 0 ]]; then log "❌ No images were successfully pushed for $source_image" return 1 fi # Create the main tag with proper multi-arch manifest if [[ ${#pushed_images[@]} -gt 1 ]]; then log "Creating multi-arch manifest for $target_image" # Remove any existing manifest (in case of retry) sudo docker manifest rm "$target_image" 2>/dev/null || true if sudo docker manifest create "$target_image" "${pushed_images[@]}"; then # Annotate each architecture in the manifest for i in "${!pushed_images[@]}"; do local arch_tag="${pushed_images[$i]}" local arch="${arch_suffixes[$i]}" sudo docker manifest annotate "$target_image" "$arch_tag" --arch "$arch" --os linux done log "Pushing multi-arch manifest to $target_image" if sudo docker manifest push "$target_image"; then log "✅ Successfully pushed multi-arch image: $target_image" else log "❌ Failed to push manifest for $target_image" return 1 fi else log "❌ Failed to create manifest for $target_image" return 1 fi else # Only one architecture - tag and push directly log "Single architecture available, pushing as $target_image" sudo docker tag "${pushed_images[0]}" "$target_image" if sudo docker push "$target_image"; then log "✅ Successfully pushed single-arch image: $target_image" else log "❌ Failed to push $target_image" return 1 fi fi # Clean up local images to save 
space sudo docker rmi "$source_image" "${pushed_images[@]}" 2>/dev/null || true if [[ ${#pushed_images[@]} -eq 1 ]]; then sudo docker rmi "$target_image" 2>/dev/null || true fi return 0 } main() { log "Starting multi-architecture image pull and push" log "Source list: $IMAGE_LIST_FILE" log "Target registry: $TARGET_REGISTRY" # Enable experimental CLI features for manifest commands export DOCKER_CLI_EXPERIMENTAL=enabled total_images=$(wc -l "$IMAGE_LIST_FILE") local processed_images=0 local successful_images=0 local failed_images=() while IFS= read -r image_line; do # Skip empty lines and comments [[ -z "$image_line" || "$image_line" =~ ^[[:space:]]*# ]] && continue # Remove leading/trailing whitespace image_line=$(echo "$image_line" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//') [[ -z "$image_line" ]] && continue echo $image_line processed_images=$((processed_images + 1)) log "[$processed_images/$total_images] Processing: $image_line" if pull_and_push_multiarch "$image_line" "$TARGET_REGISTRY"; then successful_images=$((successful_images + 1)) log "✓ Success: $image_line" else failed_images+=("$image_line") log "✗ Failed: $image_line" fi log "Progress: $processed_images/$total_images completed" echo "----------------------------------------" done <"$IMAGE_LIST_FILE" log "Final Summary:" log "Total images processed: $processed_images" log "Successful: $successful_images" log "Failed: ${#failed_images[@]}" if [[ ${#failed_images[@]} -gt 0 ]]; then log "Failed images:" printf ' %s\n' "${failed_images[@]}" exit 1 fi log "🎉 All images processed successfully!" } main "$@" -
New: Migrate away from bitnami.
Bitnami is changing their pull policy, making it unfeasible to keep using their images (more info here: 1, 2, 3), so there is the need to migrate to other image providers.
Which alternative to use
The migration can be done to the official maintained images (although this has some disadvantages) or to any of the common docker image builders:
- https://github.com/home-operations/containers/
- https://github.com/linuxserver
- https://github.com/11notes
There is an effort to build a fork of the bitnami images, but it doesn't have much momentum yet.
Regarding helm chart alternatives, a quick look turned up this one.
Infrastructure archeology
First you need to know which images you are using; to do that you can:
- Clone all git repositories of a series of organisations and do a local grep
- Search all the container images in use in kubernetes that match a desired string (see the sketch after this list)
- Recursively pull a copy of all helm charts used by an argocd repository and then do a grep.
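For the kubernetes search, a one-liner along these lines lists every image in use so you can grep for the string you care about (it assumes `kubectl` access to the cluster):
```bash
# List all container images running in the cluster, deduplicated, filtering for bitnami
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[:space:]' '\n' | sort -u | grep -i bitnami
```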
Create a local copy of the images
It's wise to make a copy of the used images in your local registry, to be able to pull the dockers once bitnami no longer lets you.
To do that you can save the used images in a
`bitnami-images.txt` file and run the script from the previous entry ("Do a copy of a list of docker images in your private registry"), adjusting the target registry to your own.
Replace a bitnami image with a local one
If you for some reason need to pull
`bitnami/discourse:3.4.7` and get an error, you need instead to pull it from `{your_registry}/bitnami/discourse:3.4.7`. If you want to pull a specific architecture you can append it at the end (`{your_registry}/bitnami/discourse:3.4.7-amd64`).
If you need to do the changes in an argocd managed kubernetes application, search the `values.yaml` or `values-{environment}.yaml` files for the `image:` string. If it's not defined you may need to look at the helm chart definition. To do that open the `Chart.yaml` file to find the chart and the version used. For example:
```yaml
---
apiVersion: v2
name: discourse
version: 1.0.0
dependencies:
  - name: discourse
    version: 12.6.2
    repository: https://charts.bitnami.com/bitnami
```
You can pull a local copy of the chart with:
- If the chart is using an `oci` url:
```bash
helm pull oci://registry-1.docker.io/bitnamicharts/postgresql --version 8.10.X --untar -d postgres8
```
- If it's using an `https` url:
```bash
helm pull cost-analyzer --repo https://kubecost.github.io/cost-analyzer/ --version 2.7.2
```
And inspect the `values.yaml` file and all the templates until you find which key value you need to add.
Cannot invoke "jdk.internal.platform.CgroupInfo.getMountPoint()" because "anyController" is null
It's caused because docker is not able to access the cgroups; this can happen when docker uses the legacy cgroups v1 while the linux kernel (>6.12) uses v2.
The best way to fix it is to upgrade the docker to use v2, if you can't you need to force the system to use the v1. To do that:
- Edit
`/etc/default/grub` to add the configuration `GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"`
- Then update GRUB: `sudo update-grub`
- Reboot
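After the reboot you can check which cgroup version the system is actually using; as far as I know the filesystem type of `/sys/fs/cgroup` is `cgroup2fs` on v2 and `tmpfs` on v1:
```bash
stat -fc %T /sys/fs/cgroup/
```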
-
New: Clone all git repositories of a series of organisations.
It assumes you have
teaconfigured to interact with the desired gitea instance.set -e ORGANIZATIONS=("ansible-playbooks" "ansible-roles") clone_org_repos() { local page=1 local has_more=true while [ "$has_more" = true ]; do echo "Fetching page $page..." local csv_output csv_output=$(tea repo ls --output csv --page "$page" 2>/dev/null || true) if [ -z "$csv_output" ] || [ "$csv_output" = '"owner","name","type","ssh"' ] || [ "$(echo "$csv_output" | wc -l)" -lt 3 ]; then echo "No more repositories found on page $page" has_more=false break fi local repo_count=0 while IFS=',' read -r owner name type ssh_url; do if [ "$owner" = '"owner"' ]; then continue fi owner=$(echo "$owner" | sed 's/"//g') name=$(echo "$name" | sed 's/"//g') ssh_url=$(echo "$ssh_url" | sed 's/"//g') # echo "owner: $owner name: $name ssh_url: $ssh_url" if [[ -n "$name" ]] && [[ -n "$ssh_url" ]] && [[ "${ORGANIZATIONS[*]}" =~ $owner ]]; then echo "Cloning repository: $name" if [ ! -d "$name" ]; then git clone "$ssh_url" "$owner/$name" || { echo "Failed to clone $name, skipping..." continue } else echo "Repository $name already exists, skipping..." fi repo_count=$((repo_count + 1)) fi done <<<"$csv_output" ((page++)) done cd .. echo "Finished processing $org" echo } main() { echo "Starting repository cloning process..." echo "Target organizations: ${ORGANIZATIONS[*]}" echo if ! command -v tea &>/dev/null; then echo "Error: 'tea' command not found. Please install gitea tea CLI." exit 1 fi if ! command -v git &>/dev/null; then echo "Error: 'git' command not found. Please install git." exit 1 fi for org in "${ORGANIZATIONS[@]}"; do if [ ! -d "$org" ]; then mkdir "$org" fi done clone_org_repos echo "Repository cloning process completed!" echo "Check the following directories:" for org in "${ORGANIZATIONS[@]}"; do if [ -d "$org" ]; then echo " - $org/ ($(find "$org" -maxdepth 1 -type d | wc -l) repositories)" fi done } main "$@"
File management configuration⚑
-
New: Introduce radicle.
Radicle is an open source, peer-to-peer code collaboration stack built on Git. Unlike centralized code hosting platforms, there is no single entity controlling the network. Repositories are replicated across peers in a decentralized manner, and users are in full control of their data and workflow.
-
New: Sum up the notes of the vim plugin development tutorial.
For the repository name, plugins usually finish with the
`.nvim` extension. I'm going to call mine `org-misc.nvim`.
Let's go with the second, add to your `init.lua` the next code:
```lua
print("Hello from our plugin")
```
We can have the code of our extension wherever we want in our filesystem, but we need to tell Neovim where our plugin's code is, so it can load the files correctly. Since I use lazy.nvim this is the way to load a plugin from a local folder:
```lua
{
  dir = "~/projects/org-misc", -- Your path
}
```
Now if you restart your neovim you won't see anything until you load it with `:lua require "org-misc"`, then you'll see the message `Hello from our plugin` in the command line.
To automatically load the plugin when you open nvim, use the next lazy config:
```lua
{
  dir = "~/projects/org-misc", -- Your path
  config = function()
    require "org-misc"
  end,
}
```
Usually `init.lua` starts with:
```lua
local M = {}

M.setup = function()
  -- nothing yet
end

return M
```
Where:
- `M` stands for module, and we'll start adding methods to it.
- `M.setup` will be the method we use to configure the plugin.
Let's start with a basic functionality to print some slides:
```lua
local M = {}

M.setup = function()
  -- nothing yet
end

---@class present.Slides
---@fields slides string[]: The slides of the file

--- Takes some lines and parses them
--- @param lines string
--- @return present.Slides
local parse_slides = function(lines)
  local slides = { slides = {} }
  for _, line in ipairs(lines) do
    print(line)
  end
  return slides
end

print(parse_slides({
  "# Hello",
  "this is something else",
  "# world",
  "this is something else",
}))

return M
```
You can run the code in the current buffer with `:%lua`. For quick access, I've defined the next binding:
```lua
keymap.set("n", "<leader>X", ":%lua<cr>", { desc = "Run the lua code in the current buffer" })
```
The `print(parse_slides...` part is temporary code so that you can debug easily. Once it's ready you'll remove it.
Call a method of a module
To run the method of a module:
```lua
local M = {}

M.setup = function()
  -- nothing yet
end

return M
```
You can do `require('org-misc').setup()`
Set keymaps
Inside the code of the plugin
You can set keymaps into your plugins by using:
```lua
vim.keymap.set("n", "n", function()
  -- code
end)
```
The problem is that it will override the `n` key everywhere, which is not a good idea; that's why we normally limit it to the current buffer.
You can restrict it to the current buffer with `buffer = true`:
```lua
vim.keymap.set("n", "n", function()
  -- code
end, { buffer = true })
```
Continue till the end
If you want to stop capturing the traffic flow and go to the end ignoring all breakpoints, remove all breakpoints and do `.c`.
Reload the plugin without exiting nvim
If you are using lazy.nvim, there is a feature that lazy.nvim provides for this purpose: `Lazy reload your_plugin your_plugin2`
Neovim plugin testing
We're going to test it with
`plenary`. We'll add a `tests` directory at the root of our repository.
Each of the test files needs to end in `_spec.lua`, so if we want to test `parse_lines` it will be called `parse_lines_spec.lua`.
Each test file has the following structure (these are all the tests for the `clockin` method):
```lua
local clockin = require('org-misc').clockin

describe("org-misc.clockin", function()
  it("should do clockin", function()
    assert.is.True(clock_in())
  end)
end)
```
Now you can run the test with
`:PlenaryBustedFile %`
Configuring neotest to run the tests
Using
`:PlenaryBustedFile %` is not comfortable, that's why we're going to use `neotest`.
Configure it with:
```lua
return {
  {
    "nvim-neotest/neotest",
    dependencies = {
      "nvim-neotest/neotest-plenary",
    },
    config = function()
      require("neotest").setup({
        adapters = {
          require("neotest-plenary"),
        },
      })
    end,
  },
}
```
Now you can do:
- `<leader>tT` to run all test files
- `<leader>tt` to run the whole file
- `<leader>tl` to run the last test
- `<leader>to` to show the output
- `<leader>tr` to run the nearest
- `<leader>ts` to show the summary
Remove the Undefined global describe linter warnings
Add to the root of your repository a
`.luarc.json` file with the next contents:
```json
{
  "$schema": "https://raw.githubusercontent.com/sumneko/vscode-lua/master/setting/schema.json",
  "diagnostics": {
    "globals": ["vim"]
  },
  "hint": { "enable": true },
  "runtime": {
    "path": ["?.lua", "?/init.lua"],
    "pathStrict": true,
    "version": "LuaJIT"
  },
  "telemetry": { "enable": false },
  "workspace": {
    "checkThirdParty": "Disable",
    "ignoreDir": [".git"],
    "library": [
      "./lua",
      "$VIMRUNTIME/lua",
      "${3rd}/luv/library",
      "./tests/.deps/plugins/plenary"
    ]
  }
}
```
Testing internal functions
If you have a function
`parse_lines` in your module that you want to test, you can export it as an internal method:
```lua
local parse_lines = function()
  -- code
end

M._parse_lines = parse_lines
```
New: Control an existing nvim instance with dap.
Once you have all set up and assuming you're using the lazyvim keybindings for
`nvim-dap`:
```lua
vim.api.nvim_set_keymap('n', '<leader>ds', [[:lua require"osv".launch({port = 8086})<CR>]], { noremap = true })
vim.api.nvim_set_keymap('n', '<leader>dq', [[:lua require"osv".stop()<CR>]], { noremap = true })
```
You will debug the plugin by:
- Launch the server in the nvim instance where you're going to run the actions using
`<leader>ds`.
- Open another Neovim instance with the source file (the debugger).
- Place a breakpoint with `<leader>db`.
- On the debugger connect to the DAP client with `<leader>dc`.
- Optionally open the `nvim-dap-ui` with `<leader>B` in the debugger.
- Run your script/plugin in the debuggee
Now you can interact with the debugger in the window below the code. You have the next commands:
- `help`: Show all commands
- `<enter>`: run the same action as the previous one. For example if you do `.n` and then `<enter>` it will run `.n` again.
- `.n` or `.next`: next step
- `.b` or `.back`: previous step (if the debugger supports it)
- `.c` or `.continue`: Continue to the next breakpoint.
- Launch the server in the nvim instance where you're going to run the actions using
-
New: How to exclude some files from the search.
If anyone else comes here in the future and has the following setup:
- Using `fd` as default command: `export FZF_DEFAULT_COMMAND='fd --type file --hidden --follow'`
- Using `:Rg` to grep in files
And wants to exclude a specific path in a git project, say `path/to/exclude` (but one that should not be included in `.gitignore`), from both `fd` and `rg` as used by `fzf.vim`, then the easiest way I found is to create ignore files for the respective tools and then ignore those files in the local git clone (as they are only used by me):
```bash
cd git_proj/
echo "path/to/exclude" > .rgignore
echo "path/to/exclude" > .fdignore
printf ".rgignore\n.fdignore" >> .git/info/exclude
```
Plugin System⚑
- Correction: Write python plugins with entrypoints.
Python Snippets⚑
-
New: Convert a datetime into a date.
```python
from datetime import datetime

datetime.now().date()
```
New: Download book previews from google books.
You will only get some of the pages, but it can help complete the final pdf.
This first script gets the images data:
```python
import asyncio
import os
import json
import re
from urllib.parse import urlparse, parse_qs

from playwright.async_api import async_playwright
import aiohttp
import aiofiles


async def download_image(session, src, output_path):
    """Download image from URL and save to specified path"""
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; rv:128.0) Gecko/20100101 Firefox/128.0",
            "Accept": "*/*",
            "Accept-Language": "en-US,en;q=0.5",
            "Referer": "https://books.google.es/",
            "DNT": "1",
            "Sec-GPC": "1",
            "Connection": "keep-alive",
        }
        async with session.get(src, headers=headers) as response:
            response.raise_for_status()
            async with aiofiles.open(output_path, "wb") as f:
                await f.write(await response.read())
        print(f"Downloaded: {output_path}")
        return True
    except Exception as e:
        print(f"Error downloading {src}: {e}")
        return False


def extract_page_number(pid):
    """Extract numeric page number from page ID"""
    match = re.search(r"PA(\d+)", pid)
    if match:
        return int(match.group(1))
    try:
        return int(pid.replace("PA", "").replace("PP", ""))
    except:
        return 9999


async def main():
    # Create output directory
    output_dir = "book_images"
    os.makedirs(output_dir, exist_ok=True)

    # Keep track of all pages found
    seen_pids = set()
    page_counter = 0
    download_tasks = []

    # Create HTTP session for downloads
    async with aiohttp.ClientSession() as session:
        async with async_playwright() as p:
            browser = await p.firefox.launch(headless=False)
            context = await browser.new_context(
                user_agent="Mozilla/5.0 (Windows NT 10.0; rv:128.0) Gecko/20100101 Firefox/128.0"
            )

            # Create a page and set up response handling
            page = await context.new_page()

            # Store seen URLs to avoid duplicates
            seen_urls = set()

            # Set up response handling for JSON data
            async def handle_response(response):
                nonlocal page_counter
                url = response.url

                # Only process URLs with jscmd=click3
                if "jscmd=click3" in url and url not in seen_urls:
                    try:
                        # Try to parse as JSON
                        json_data = await response.json()
                        seen_urls.add(url)

                        # Process and download page data immediately
                        if "page" in json_data and isinstance(json_data["page"], list):
                            for page_data in json_data["page"]:
                                if "src" in page_data and "pid" in page_data:
                                    pid = page_data["pid"]
                                    if pid not in seen_pids:
                                        seen_pids.add(pid)
                                        src = page_data["src"]

                                        # Create filename with sequential numbering
                                        formatted_index = (
                                            f"{int(pid.replace('PA', '')):03d}"
                                        )
                                        output_file = os.path.join(
                                            output_dir, f"page-{formatted_index}.png"
                                        )
                                        page_counter += 1

                                        print(
                                            f"Found new page: {pid}, scheduling download"
                                        )

                                        # Start download immediately
                                        task = asyncio.create_task(
                                            download_image(session, src, output_file)
                                        )
                                        download_tasks.append(task)
                        return len(seen_pids)
                    except Exception as e:
                        print(f"Error processing response from {url}: {e}")

            # Register response handler
            page.on("response", handle_response)

            # Navigate to the starting URL
            book_url = (
                "https://books.google.es/books?id=412loEMJA9sC&lpg=PP1&hl=es&pg=PA5"
            )
            await page.goto(book_url)

            # Wait for initial page load
            await page.wait_for_load_state("networkidle")

            # Scroll loop variables
            max_scroll_attempts = 500  # Safety limit
            scroll_count = 0
            pages_before_scroll = 0
            consecutive_no_new_pages = 0

            # Continue scrolling until we find no new pages for several consecutive attempts
            while scroll_count < max_scroll_attempts and consecutive_no_new_pages < 5:
                # Get current page count before scrolling
                pages_before_scroll = len(seen_pids)

                # Use PageDown key to scroll
                await page.keyboard.press("PageDown")
                scroll_count += 1

                # Wait for network activity
                await asyncio.sleep(2)

                # Check if we found new pages after scrolling
                if len(seen_pids) > pages_before_scroll:
                    consecutive_no_new_pages = 0
                    print(
                        f"Scroll {scroll_count}: Found {len(seen_pids) - pages_before_scroll} new pages"
                    )
                else:
                    consecutive_no_new_pages += 1
                    print(
                        f"Scroll {scroll_count}: No new pages found ({consecutive_no_new_pages}/5)"
                    )

            print(f"Scrolling complete. Found {len(seen_pids)} pages total.")
            await browser.close()

        # Wait for any remaining downloads to complete
        if download_tasks:
            print(f"Waiting for {len(download_tasks)} downloads to complete...")
            await asyncio.gather(*download_tasks)

    print(f"Download complete! Downloaded {page_counter} images.")


if __name__ == "__main__":
    asyncio.run(main())
```

-
New: Send keystrokes to an active window.
```python
import subprocess

subprocess.run(['xdotool', 'type', 'Hello world!'])
subprocess.run(['xdotool', 'key', 'Return'])  # press enter
subprocess.run(['xdotool', 'key', 'ctrl+c'])

window_id = subprocess.check_output(['xdotool', 'getactivewindow']).decode().strip()
subprocess.run(['xdotool', 'windowactivate', window_id])
```

-
New: Make a temporary file.
```python
import os
import subprocess
import tempfile

# The editor is not defined in the original snippet, fall back to the environment
editor = os.environ.get("EDITOR", "vim")

with tempfile.NamedTemporaryFile(
    suffix=".tmp", mode="w+", encoding="utf-8"
) as temp:
    temp.write(
        "# Enter commit message body. Lines starting with '#' will be ignored.\n"
    )
    temp.write("# Leave file empty to skip the body.\n")
    temp.flush()

    subprocess.call([editor, temp.name])

    temp.seek(0)
    lines = temp.readlines()
```

-
New: Remove a directory with content.
```python
import shutil
from pathlib import Path

shutil.rmtree(Path('/path/to/directory'))
```

-
New: List only the files and directories of the first level of a directory.
```python
from pathlib import Path

path = Path("/your/directory")

for item in path.iterdir():
    if item.is_file():
        print(f"File: {item.name}")
    elif item.is_dir():
        print(f"Directory: {item.name}")
```

Helm⚑

-

New: Download a chart.
If the chart is using an `oci` url:

```bash
helm pull oci://registry-1.docker.io/bitnamicharts/postgresql --version 8.10.X --untar -d postgres8
```

If it's using an `https` url:

```bash
helm pull cost-analyzer --repo https://kubecost.github.io/cost-analyzer/ --version 2.7.2
```

-
New: Get value of enum by value.
```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3
```

Sometimes it’s useful to access members in enumerations programmatically (i.e. situations where `Color.RED` won’t do because the exact color is not known at program-writing time). `Enum` allows such access:

```python
>>> Color(1)
<Color.RED: 1>
>>> Color(3)
<Color.BLUE: 3>
```

If you want to access enum members by name, use item access:

```python
>>> Color['RED']
<Color.RED: 1>
>>> Color['GREEN']
<Color.GREEN: 2>
```
-
New: Get the directory of a python script.
This is useful to build paths relative to the script itself, so that the script keeps working when run from another directory.
```python
from pathlib import Path

script_dir = Path(__file__).parent
script_path = script_dir / '../another-file'
```
GitPython⚑
-
New: Checking out an existing branch.
```python
from git import Repo

repo = Repo('/path/to/repository')

heads = repo.heads
develop = heads.develop

repo.head.reference = develop
```
Pandas⚑
-
New: Try FireDucks!.
NOTE: you might as well use FireDucks as it has the same API interface and is waaay faster. The good thing is that you only need to add to the top of your code
```python
import fireducks.pandas as pd
```

and everything should work (I haven't tried it myself).
Elasticsearch⚑
-
New: Expunge deleted documents from all indices in an elasticsearch cluster.
```bash
ES_HOST="${1:-http://localhost:9200}"
DEFAULT_SETTING="5" # Target default value (5%)

INDICES=$(curl -s -XGET "$ES_HOST/_cat/indices?h=index")

for INDEX in $INDICES; do
  echo "Processing index: $INDEX"

  # Close the index to modify static settings
  curl -s -XPOST "$ES_HOST/$INDEX/_close" > /dev/null

  # Update expunge_deletes_allowed to 1%
  curl -s -XPUT "$ES_HOST/$INDEX/_settings" -H 'Content-Type: application/json' -d'
  {
    "index.merge.policy.expunge_deletes_allowed": "0"
  }' > /dev/null

  # Reopen the index
  curl -s -XPOST "$ES_HOST/$INDEX/_open" > /dev/null

  # Trigger forcemerge (async)
  # curl -s -XPOST "$ES_HOST/$INDEX/_forcemerge?only_expunge_deletes=true&wait_for_completion=false" > /dev/null
  echo "Forcemerge triggered for $INDEX"
  curl -s -XPOST "$ES_HOST/$INDEX/_forcemerge?only_expunge_deletes=true" > /dev/null &

  echo "Waiting until all forcemerge tasks are done"
  while curl -s $ES_HOST/_cat/tasks\?v | grep forcemerge > /dev/null ; do
    curl -s $ES_HOST/_cat/indices | grep $INDEX
    sleep 10
  done

  # Close the index again
  curl -s -XPOST "$ES_HOST/$INDEX/_close" > /dev/null

  # Update to the new default (5%)
  curl -s -XPUT "$ES_HOST/$INDEX/_settings" -H 'Content-Type: application/json' -d'
  {
    "index.merge.policy.expunge_deletes_allowed": "'"$DEFAULT_SETTING"'"
  }' > /dev/null

  # Reopen the index
  curl -s -XPOST "$ES_HOST/$INDEX/_open" > /dev/null
done

echo "Done! All indices updated."
```
Streamlit⚑
-
New: Show a spinner while the data is loading.
```python
import streamlit as st

st.title("Title")

with st.spinner("Loading data..."):
    data = long_process()

st.markdown('content shown once the data is loaded')
```

-
New: Deploy in docker.
Here's an example `Dockerfile` that you can add to the root of your directory:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    software-properties-common \
    && rm -rf /var/lib/apt/lists/*

COPY . .

RUN pip3 install -r requirements.txt

EXPOSE 8501

HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health

ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

While you debug you may want to replace the `COPY . .` with:

```dockerfile
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY app.py .
```

So that the build iterations are faster.
You can build it with `docker build -t streamlit .`, then test it with `docker run -p 8501:8501 streamlit`.

Once you know it's working you can create a docker compose:
```yaml
services:
  streamlit:
    image: hm2025_nodos
    container_name: my_app
    env_file:
      - .env
    ports:
      - "8501:8501"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8501/_stcore/health"]
      interval: 30s
      timeout: 10s
      retries: 5
```

If you use swag from linuxserver you can expose the service with the next nginx configuration:
```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name my_app.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_streamlit my_container_name;
        proxy_pass http://$upstream_streamlit:8501;
    }
}
```

And if you save your `docker-compose.yaml` file into `/srv/streamlit` you can use the following systemd service to automatically start it on boot.

```ini
[Unit]
Description=my_app
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/srv/streamlit
TimeoutStartSec=100
RestartSec=2s
ExecStart=/usr/bin/docker compose -f docker-compose.yaml up
ExecStop=/usr/bin/docker compose -f docker-compose.yaml down

[Install]
WantedBy=multi-user.target
```
DevSecOps⚑
Velero⚑
-
New: Difference between sync and refresh.
- Sync: Reconciles the current cluster state with the target state in git.
- Refresh: Fetches the latest manifests from git and compares the diff with the live state.
- Hard Refresh: Clears any caches and does a refresh.
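For reference, these operations map to the CLI (a minimal sketch; `my-app` is a placeholder application name):

```bash
argocd app sync my-app                 # Sync: reconcile the cluster state with git
argocd app get my-app --refresh        # Refresh: re-fetch manifests and diff against the live state
argocd app get my-app --hard-refresh   # Hard refresh: drop caches, then refresh
```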
-
New: Add api and library docs.
There is a python library
-
New: ArgoCD commandline installation.
```bash
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
install -m 555 argocd-linux-amd64 ~/.local/bin/argocd
rm argocd-linux-amd64
```

-
New: ArgoCD commandline usage.
The `argocd login` command is the first step in interacting with the Argo CD API. This command allows you to authenticate yourself, setting up a secure connection between your terminal and the Argo CD server. You’ll need to provide your server’s URL and your credentials. There are three different ways to login, I found that `--core` is the most useful as it will use your kubernetes credentials.

```bash
argocd login your.argocd.url.com --core --name production
```

Be careful though that you can't set a different `argocd context` for different clusters using `--core` even though you set the `--kube-context` flag. The config file `~/.config/argocd/config` shows that it's using whatever kubernetes context you're using. So be careful that you're applying it in the correct one!

Set an argocd context

The `argocd context` command is used to manage your Argo CD contexts. A context is a configuration that represents a Kubernetes cluster, user, and namespace. You can use this command to switch Argo CD between different contexts, allowing you to manage multiple Kubernetes namespaces and clusters from a single terminal.

You can see the different contexts with:

```bash
argocd context
```

Get the list of applications

```bash
argocd app list
```

Refresh an application

```bash
argocd app get app_name --refresh
```

Show the diff of an application

```bash
argocd app diff app_name
```

Sync an application

```bash
argocd app sync app_name
```

-
New: More features that are not there yet.
- Python library: I have found none
- Argocd TUI: I have found none that is updated
-
New: Optimizing Kubernetes Cluster Node Count: A Strategic Approach.
Reducing the number of nodes in a Kubernetes cluster is a critical strategy for controlling cloud infrastructure costs without compromising system reliability. Here are key best practices to help organizations right-size their Kubernetes deployments:
1. Availability Zone Consolidation
Carefully evaluate the number of availability zones (AZs) used in your cluster. While multi-AZ deployments provide redundancy, using too many zones can:

- Increase infrastructure complexity
- Raise management overhead
- Unnecessarily distribute resources
- Increase cost without proportional benefit
Recommendation: Aim for a balanced approach, typically 3 AZs, which provides robust redundancy while allowing more efficient resource consolidation.
2. Intelligent Node Sizing and Management
Implement sophisticated node management strategies:
Node Provisioning Optimization

- Use tools like Karpenter to dynamically manage node sizing
- Continuously analyze and adjust node types based on actual workload requirements
- Consolidate smaller nodes into fewer, more efficiently sized instances
Overhead Calculation

Regularly assess system and Kubernetes overhead:

- Calculate total system resource consumption
- Identify underutilized resources
- Understand the overhead percentage for different node types
- Make data-driven decisions about node scaling
3. Advanced Pod Autoscaling Techniques
Horizontal Pod Autoscaling (HPA)

- Implement HPA for workloads with variable load (see the sketch after this list)
- Automatically adjust pod count based on CPU/memory utilization
- Ensure efficient resource distribution across existing nodes
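As a minimal sketch of the imperative route (the deployment name and thresholds are placeholders, not part of the original notes):

```bash
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
kubectl get hpa
```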
Vertical Pod Autoscaling (VPA)

- Use VPA in recommendation mode initially
- Carefully evaluate automated resource adjustments
- Manually apply recommendations to prevent potential service disruptions
4. Workload Optimization Strategies
High Availability Considerations

- Ensure critical workloads have robust high availability configurations
- Design applications to tolerate node failures gracefully
- Implement pod disruption budgets to maintain service reliability (see the sketch after this list)
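A minimal pod disruption budget sketch (the names and selector are placeholders):

```bash
kubectl create poddisruptionbudget my-app-pdb --selector=app=my-app --min-available=2
```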
Resource Right-Sizing

- Conduct thorough analysis of actual resource utilization
- Avoid over-provisioning by matching resource requests to actual usage
- Use monitoring tools to gain insights into workload characteristics
5. Continuous Monitoring and Refinement
- Implement comprehensive monitoring of cluster performance
- Regularly review node utilization metrics
- Create feedback loops for continuous optimization
- Develop scripts or use tools to collect and analyze resource usage data
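Assuming metrics-server is installed, a quick way to eyeball node and pod utilization from the command line is:

```bash
kubectl top nodes
kubectl top pods --all-namespaces --sort-by=memory
```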
-
New: Upgrading.
You can no longer use the latest tag as it can lead to unintentional updates and potentially broken setups.
The tag will not be removed, however it will also not be updated past `2025.2`.

They strongly recommend the use of a specific version tag for authentik instances' container images like `:2025.2`.

-
New: Get the values of a chart.
```bash
helm show values zammad --repo https://zammad.github.io/zammad-helm --version 14.0.1
```

-
New: Timestamp Issues ("entry too far behind").
The most frequent error shows log entries being rejected because their timestamps are too old.
This suggests either:
- Clock synchronization issues between your log sources and Loki
- Delayed log shipping/buffering
- Replay of old logs
To solve this:
- Check that your hosts' clocks are in sync (see the check below)
- Adjust Loki's ingestion window in your config:
```yaml
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h  # Increase from default (usually 1h)
```
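On systemd hosts a quick sanity check of the clock (assuming timesyncd or chrony is in use) is:

```bash
timedatectl status   # look for "System clock synchronized: yes"
```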
You can also prune the logs. For example in the case of a docker container (named `dawarich_app`) you can:

```bash
sudo truncate -s 0 /var/lib/docker/containers/$(docker inspect -f '{{.Id}}' dawarich_app)/$(docker inspect -f '{{.Id}}' dawarich_app)-json.log
```

-
New: Ingestion rate limit exceeded for user.
Increase rate limits in Loki config:
```yaml
limits_config:
  ingestion_rate_mb: 8       # Increase from 4MB default
  ingestion_burst_size_mb: 16
```

Also check which logs are triggering this rate limit because it may be the case that the amount of logs is too great due to an error.
-
New: Recursively pull a copy of all helm charts used by an argocd repository.
Including the dependencies of the dependencies.
```python
import argparse
import logging
import subprocess
import sys
from pathlib import Path
from typing import Dict, List, Set

import yaml


class HelmChartPuller:
    def __init__(self):
        self.pulled_charts: Set[str] = set()
        self.setup_logging()

    def setup_logging(self):
        logging.basicConfig(
            level=logging.INFO,
            format="[%(asctime)s] %(levelname)s: %(message)s",
            datefmt="%Y-%m-%d %H:%M:%S",
        )
        self.logger = logging.getLogger(__name__)

    def parse_chart_yaml(self, chart_file: Path) -> Dict:
        """Parse Chart.yaml file and return its contents."""
        try:
            with open(chart_file, "r", encoding="utf-8") as f:
                return yaml.safe_load(f) or {}
        except Exception as e:
            self.logger.error(f"Failed to parse {chart_file}: {e}")
            return {}

    def get_dependencies(self, chart_data: Dict) -> List[Dict]:
        """Extract dependencies from chart data."""
        return chart_data.get("dependencies", [])

    def is_chart_pulled(self, name: str, version: str) -> bool:
        """Check if chart has already been pulled."""
        chart_id = f"{name}-{version}"
        return chart_id in self.pulled_charts

    def mark_chart_pulled(self, name: str, version: str):
        """Mark chart as pulled to avoid duplicates."""
        chart_id = f"{name}-{version}"
        self.pulled_charts.add(chart_id)

    def pull_chart(self, name: str, version: str, repository: str) -> bool:
        """Pull a Helm chart using appropriate method (OCI or traditional)."""
        if self.is_chart_pulled(name, version):
            self.logger.info(f"Chart {name}-{version} already pulled, skipping")
            return True

        self.logger.info(f"Pulling chart: {name} version {version} from {repository}")

        try:
            if repository.startswith("oci://"):
                oci_url = f"{repository}/{name}"
                cmd = ["helm", "pull", oci_url, "--version", version, "--untar"]
            else:
                cmd = [
                    "helm",
                    "pull",
                    name,
                    "--repo",
                    repository,
                    "--version",
                    version,
                    "--untar",
                ]

            result = subprocess.run(cmd, capture_output=True, text=True, check=True)
            self.logger.info(f"Successfully pulled chart: {name}-{version}")
            self.mark_chart_pulled(name, version)
            return True
        except subprocess.CalledProcessError as e:
            self.logger.error(f"Failed to pull chart {name}-{version}: {e.stderr}")
            return False

    def process_chart_dependencies(self, chart_file: Path):
        """Process dependencies from a Chart.yaml file recursively."""
        self.logger.info(f"Processing dependencies from: {chart_file}")

        chart_data = self.parse_chart_yaml(chart_file)
        if not chart_data:
            return

        dependencies = self.get_dependencies(chart_data)
        if not dependencies:
            self.logger.info(f"No dependencies found in {chart_file}")
            return

        for dep in dependencies:
            name = dep.get("name", "")
            version = dep.get("version", "")
            repository = dep.get("repository", "")

            if not all([name, version, repository]):
                self.logger.warning(f"Incomplete dependency in {chart_file}: {dep}")
                continue

            if self.pull_chart(name, version, repository):
                pulled_chart_dir = Path.cwd() / name
                if pulled_chart_dir.is_dir():
                    dep_chart_file = pulled_chart_dir / "Chart.yaml"
                    if dep_chart_file.is_file():
                        self.logger.info(
                            f"Found Chart.yaml in pulled dependency: {dep_chart_file}"
                        )
                        self.process_chart_dependencies(dep_chart_file)

    def find_chart_files(self, search_dir: Path) -> List[Path]:
        """Find all Chart.yaml files in the given directory."""
        self.logger.info(f"Searching for Chart.yaml files in: {search_dir}")
        return list(search_dir.rglob("Chart.yaml"))

    def check_dependencies(self):
        """Check if required dependencies are available."""
        try:
            subprocess.run(["helm", "version"], capture_output=True, check=True)
        except (subprocess.CalledProcessError, FileNotFoundError):
            self.logger.error("helm command not found. Please install Helm.")
            sys.exit(1)

        try:
            import yaml
        except ImportError:
            self.logger.error(
                "PyYAML module not found. Install with: pip install PyYAML"
            )
            sys.exit(1)

    def run(self, target_dir: str):
        """Main execution method."""
        self.check_dependencies()

        target_path = Path(target_dir)
        if not target_path.is_dir():
            self.logger.error(f"Directory '{target_dir}' does not exist")
            sys.exit(1)

        self.logger.info(f"Starting to process Helm charts in: {target_path}")
        self.logger.info(f"Charts will be pulled to current directory: {Path.cwd()}")

        chart_files = self.find_chart_files(target_path)
        if not chart_files:
            self.logger.info("No Chart.yaml files found")
            return

        for chart_file in chart_files:
            self.logger.info(f"Found Chart.yaml: {chart_file}")
            self.process_chart_dependencies(chart_file)

        self.logger.info(
            f"Completed processing. Total unique charts pulled: {len(self.pulled_charts)}"
        )
        if self.pulled_charts:
            self.logger.info(f"Pulled charts: {', '.join(sorted(self.pulled_charts))}")


def main():
    parser = argparse.ArgumentParser(
        description="Recursively pull Helm charts and their dependencies from Chart.yaml files"
    )
    parser.add_argument("directory", help="Directory to search for Chart.yaml files")

    args = parser.parse_args()

    puller = HelmChartPuller()
    puller.run(args.directory)


if __name__ == "__main__":
    main()
```

-
New: Cordon all the arm64 nodes of the cluster.

```bash
kubectl get nodes -l kubernetes.io/arch=arm64 -o jsonpath='{.items[*].metadata.name}' | xargs kubectl cordon
```

-
New: Search all the container images in use that match a desired string.
```bash
set -e

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >&2
}

usage() {
    echo "Usage: $0"
    echo "Describes all pods in all namespaces and greps for images containing 'bitnami'"
    exit 1
}

check_dependencies() {
    if ! command -v kubectl >/dev/null 2>&1; then
        log "Error: kubectl command not found"
        exit 1
    fi

    # Test kubectl connectivity
    if ! kubectl cluster-info >/dev/null 2>&1; then
        log "Error: Cannot connect to Kubernetes cluster"
        exit 1
    fi
}

find_bitnami_images() {
    log "Getting all pods from all namespaces..."

    # Get all pods from all namespaces and describe them
    kubectl get pods --all-namespaces -o wide --no-headers | while read -r namespace name ready status restarts age ip node nominated readiness; do
        log "Describing pod: $namespace/$name"

        # Describe the pod and grep for bitnami images
        description=$(kubectl describe pod "$name" -n "$namespace" 2>/dev/null)

        # Look for image lines containing bitnami
        bitnami_images=$(echo "$description" | grep -i "image:" | grep -i "bitnami" || true)

        if [[ -n "$bitnami_images" ]]; then
            echo "========================================="
            echo "Pod: $namespace/$name"
            echo "Status: $status"
            echo "Bitnami Images Found:"
            echo "$bitnami_images"
            echo "========================================="
            echo
        fi
    done
}

main() {
    if [[ $# -ne 0 ]]; then
        usage
    fi

    check_dependencies

    log "Starting search for Bitnami images in all pods across all namespaces"
    find_bitnami_images
    log "Search completed"
}

main "$@"
```

-
New: Force the removal of a node from the cluster.
To force the removal of a node from a Kubernetes cluster, you have several options depending on your situation:
To prevent new pods from being scheduled while you prepare:
```bash
kubectl cordon <node-name>
```

1. Graceful Node Removal (Recommended)
First, try the standard approach:
```bash
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <node-name>
```

2. Force Removal When Node is Unresponsive
If the node is unresponsive or the graceful removal fails:
```bash
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --force --grace-period=0
kubectl delete node <node-name>
```

Immediate Forced Removal
For emergency situations where you need immediate removal:
```bash
kubectl delete node <node-name> --force --grace-period=0
```

Common Drain Options
- `--ignore-daemonsets`: Ignores DaemonSet pods (they'll be recreated anyway)
- `--delete-emptydir-data`: Deletes pods using emptyDir volumes
- `--force`: Forces deletion of pods not managed by controllers
- `--grace-period=0`: Immediately kills pods without waiting
- `--timeout=300s`: Sets timeout for the drain operation
-
New: Raise alert when value is empty.
Using vector(0)
One way to solve it is to use the `vector(0)` operator with the operation `or on() vector(0)`:

```logql
(count_over_time({filename="/var/log/mail.log"} |= `Mail is sent` [24h]) or on() vector(0)) < 1
```

Using unless
If you're doing an aggregation over a label this approach won't work because it will add a new time series with value 0. In those cases use a broader search that includes other logs and the `unless` operator:

```logql
(sum by(hostname) (count_over_time({job="systemd-journal"} [1h])) unless sum by(hostname) (count_over_time({service_name="watchtower"} [1d]))) > 0
```

This will return a value > 0 for any hostname that has systemd-journal logs but no watchtower logs in the past day, which is perfect for alerting conditions.
-
New: Upgrade postgres.
Dump your database
Dump your existing database with a command similar to
```bash
docker compose exec postgresql pg_dump -U authentik -d authentik -cC > upgrade_backup_12.sql
```

Before continuing, ensure the SQL dump file
`upgrade_backup_12.sql` includes all your database content.

Stop your application stack
Stop all services with
`docker compose down`.

Backup your existing database
Move the directory where your data is to a new one:
```bash
mv /path/to/database /path/to/v12-backup
```

Modify your docker-compose.yml file
Update the PostgreSQL service image from
`docker.io/library/postgres:12-alpine` to `docker.io/library/postgres:17-alpine`.
`network_mode: none` and comment out any `network` directive to prevent connections being established to the database during the upgrade.

Recreate the database container
Pull new images and re-create the PostgreSQL container:
```bash
docker compose pull && docker compose up --force-recreate -d postgresql
```

Apply your backup to the new database:
```bash
cat upgrade_backup_12.sql | docker compose exec -T postgresql psql -U authentik
```

Remove the network configuration setting
`network_mode: none` that you added to the Compose file in the previous step.

Bring the service up
Start again the service with
`docker compose up` and see that everything is working as expected.

-
To get the available backups you can use:
```bash
velero get backups
```

Check the ones that have Failed in the name and then see the logs with:
```bash
velero backup logs backup-1h-20251215113154 | grep -v 'level=info'
```
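To dig deeper into a failed backup you can also describe it (reusing the backup name from above):

```bash
velero backup describe backup-1h-20251215113154 --details
```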
vector⚑
-
New: Add the stats of the Enterprise Capacity seagate disk.
| Specs                  | IronWolf           | IronWolf Pro         | Exos 7E8 8TB | Exos 7E10 8TB | Exos X18 16TB | Enterprise Capacity 10TB |
| ---------------------- | ------------------ | -------------------- | ------------ | ------------- | ------------- | ------------------------ |
| Bays                   | 1-8                | 1-24                 | ?            | ?             | ?             | ?                        |
| Capacity               | 1-12TB             | 2-20TB               | 8TB          | 8TB           | 16 TB         | 10 TB                    |
| RPM                    | 5,400 RPM (3-6TB)  | 7200 RPM             | 7200 RPM     | 7200 RPM      | 7200 RPM      | 7200 RPM                 |
| RPM                    | 5,900 RPM (1-3TB)  | 7200 RPM             | 7200 RPM     | 7200 RPM      | 7200 RPM      | 7200 RPM                 |
| RPM                    | 7,200 RPM (8-12TB) | 7200 RPM             | 7200 RPM     | 7200 RPM      | 7200 RPM      | 7200 RPM                 |
| Speed                  | 180MB/s (1-12TB)   | 214-260MB/s (4-18TB) | 249 MB/s     | 255 MB/s      | 258 MB/s      | 254 MB/s                 |
| Cache                  | 64MB (1-4TB)       | 256 MB               | 256 MB       | 256 MB        | 256 MB        | 256 MB                   |
| Cache                  | 256MB (3-12TB)     | 256 MB               | 256 MB       | 256 MB        | 256 MB        | 256 MB                   |
| Power Consumption      | 10.1 W             | 10.1 W               | 12.81 W      | 11.03 W       | 9.31 W        | 8 W                      |
| Power Consumption Rest | 7.8 W              | 7.8 W                | 7.64 W       | 7.06 W        | 5.08 W        | 4.5 W                    |
| Workload               | 180TB/yr           | 300TB/yr             | 550TB/yr     | 550TB/yr      | 550TB/yr      | < 550TB/yr               |
| MTBF                   | 1 million          | 1 million            | 2 millions   | 2 millions    | 2.5 millions  | 2.5 millions             |
| Noise idle             | ?                  | ?                    | ?            | ?             | ?             | 3.0 bels max             |
| Noise performance seek | ?                  | ?                    | ?            | ?             | ?             | 3.4 bels max             |
-
New: Suggest to look at the slimbook.
I built a server pretty much the same as the slimbook.
-
New: Introduce smartctl.
Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T. or SMART) is a monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs). Its primary function is to detect and report various indicators of drive reliability, or how long a drive can function while anticipating imminent hardware failures.
When S.M.A.R.T. data indicates a possible imminent drive failure, software running on the host system may notify the user so action can be taken to prevent data loss, and the failing drive can be replaced and no data is lost.
General information
A field study at Google covering over 100,000 consumer-grade drives from December 2005 to August 2006 found correlations between certain S.M.A.R.T. information and annualized failure rates:
- In the 60 days following the first uncorrectable error on a drive (S.M.A.R.T. attribute 0xC6 or 198) detected as a result of an offline scan, the drive was, on average, 39 times more likely to fail than a similar drive for which no such error occurred.
- First errors in reallocations, offline reallocations (S.M.A.R.T. attributes 0xC4 and 0x05 or 196 and 5) and probational counts (S.M.A.R.T. attribute 0xC5 or 197) were also strongly correlated to higher probabilities of failure.
- Conversely, little correlation was found for increased temperature and no correlation for usage level. However, the research showed that a large proportion (56%) of the failed drives failed without recording any count in the "four strong S.M.A.R.T. warnings" identified as scan errors, reallocation count, offline reallocation, and probational count.
- Further, 36% of failed drives did so without recording any S.M.A.R.T. error at all, except the temperature, meaning that S.M.A.R.T. data alone was of limited usefulness in anticipating failures.
On Debian systems:
```bash
sudo apt-get install smartmontools
```

By default when you install it all your drives are checked periodically with the `smartd` daemon under the `smartmontools` systemd service.

Usage
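To check that the periodic monitoring is actually running you can look at the service (a minimal check; the unit name may differ on other distributions):

```bash
systemctl status smartmontools
```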
Running the tests
S.M.A.R.T. drives may offer a number of self-tests:
- Short: Checks the electrical and mechanical performance as well as the read performance of the disk. Electrical tests might include a test of buffer RAM, a read/write circuitry test, or a test of the read/write head elements. Mechanical test includes seeking and servo on data tracks. Scans small parts of the drive's surface (area is vendor-specific and there is a time limit on the test). Checks the list of pending sectors that may have read errors, and it usually takes under two minutes.
- Long/extended: A longer and more thorough version of the short self-test, scanning the entire disk surface with no time limit. This test usually takes several hours, depending on the read/write speed of the drive and its size. It is possible for the long test to pass even if the short test fails.
- Conveyance: Intended as a quick test to identify damage incurred during transporting of the device from the drive manufacturer to the computer manufacturer. Only available on ATA drives, and it usually takes several minutes.
Drives remain operable during self-test, unless a "captive" option (ATA only) is requested.
Long test
Start with a long self test with `smartctl`. Assuming the disk to test is `/dev/sdd`:

```bash
smartctl -t long /dev/sdd
```

The command will respond with an estimate of how long it thinks the test will take to complete.
To check progress use:
```bash
smartctl -A /dev/sdd | grep remaining
smartctl -c /dev/sdd | grep remaining
```

Don't check too often because it can abort the test with some drives. If you receive an empty output, examine the reported status with:
```bash
smartctl -l selftest /dev/sdd
```

If errors are shown, check the `dmesg` as there are usually useful traces of the error.

-
The output of a `smartctl` command is difficult to read:

```
smartctl 5.40 2010-03-16 r3077 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     SAMSUNG SpinPoint F2 EG series
Device Model:     SAMSUNG HD502HI
Serial Number:    S1VZJ9CS712490
Firmware Version: 1AG01118
User Capacity:    500,107,862,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 3b
Local Time is:    Wed Feb  9 15:30:42 2011 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 (6312) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 106) minutes.
Conveyance self-test routine
recommended polling time:        (  12) minutes.
SCT capabilities:              (0x003f) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   099   099   051    Pre-fail Always      -      2376
  3 Spin_Up_Time            0x0007   091   091   011    Pre-fail Always      -      3620
  4 Start_Stop_Count        0x0032   100   100   000    Old_age  Always      -      405
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail Always      -      0
  7 Seek_Error_Rate         0x000f   253   253   051    Pre-fail Always      -      0
  8 Seek_Time_Performance   0x0025   100   100   015    Pre-fail Offline     -      0
  9 Power_On_Hours          0x0032   100   100   000    Old_age  Always      -      717
 10 Spin_Retry_Count        0x0033   100   100   051    Pre-fail Always      -      0
 11 Calibration_Retry_Count 0x0012   100   100   000    Old_age  Always      -      0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age  Always      -      405
 13 Read_Soft_Error_Rate    0x000e   099   099   000    Old_age  Always      -      2375
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age  Always      -      0
184 End-to-End_Error        0x0033   100   100   000    Pre-fail Always      -      0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age  Always      -      2375
188 Command_Timeout         0x0032   100   100   000    Old_age  Always      -      0
190 Airflow_Temperature_Cel 0x0022   084   074   000    Old_age  Always      -      16 (Lifetime Min/Max 16/16)
194 Temperature_Celsius     0x0022   084   071   000    Old_age  Always      -      16 (Lifetime Min/Max 16/16)
195 Hardware_ECC_Recovered  0x001a   100   100   000    Old_age  Always      -      3558
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age  Always      -      0
197 Current_Pending_Sector  0x0012   098   098   000    Old_age  Always      -      81
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age  Offline     -      0
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age  Always      -      1
200 Multi_Zone_Error_Rate   0x000a   100   100   000    Old_age  Always      -      0
201 Soft_Read_Error_Rate    0x000a   253   253   000    Old_age  Always      -      0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
```

Checking overall health
Somewhere in your report you'll see something like:
```
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
```

If it doesn’t return PASSED, you should immediately backup all your data. Your hard drive is probably failing.
That message can also be shown with
```bash
smartctl -H /dev/sda
```

Each drive manufacturer defines a set of attributes, and sets threshold values beyond which attributes should not pass under normal operation. But they do not agree on precise attribute definitions and measurement units, so the following list of attributes is a general guide only.
If one or more attributes have the "prefailure" flag, and the "current value" of such a prefailure attribute is smaller than or equal to its "threshold value" (unless the "threshold value" is 0), that will be reported as a "drive failure". In addition, utility software can send the SMART RETURN STATUS command to the ATA drive, which may report one of three statuses: "drive OK", "drive warning" or "drive failure".
Each of the SMART attributes has several columns, as shown by `smartctl -a`:

- ID: The ID number of the attribute, good for comparing with other lists like Wikipedia: S.M.A.R.T.: Known ATA S.M.A.R.T. attributes because the attribute names sometimes differ.
- Name: The name of the SMART attribute.
- Value: The current, normalized value of the attribute. Higher values are always better (except for temperature for hard disks of some manufacturers). The range is normally 0-100, for some attributes 0-255 (so that 100 resp. 255 is best, 0 is worst). There is no standard on how manufacturers convert their raw value to this normalized one: when the normalized value approaches threshold, it can do linearily, exponentially, logarithmically or any other way, meaning that a doubled normalized value does not necessarily mean “twice as good”.
- Worst: The worst (normalized) value that this attribute had at any point of time where SMART was enabled. There seems to be no mechanism to reset current SMART attribute values, but this still makes sense as some SMART attributes, for some manufacturers, fluctuate over time so that keeping the worst one ever is meaningful.
- Threshold: The threshold below which the normalized value will be considered “exceeding specifications”. If the attribute type is “Pre-fail”, this means that SMART thinks the hard disk is just before failure. This will “trigger” SMART: setting it from “SMART test passed” to “SMART impending failure” or similar status.
- Type: The type of the attribute. Either “Pre-fail” for attributes that are said to indicate impending failure, or “Old_age” for attributes that just indicate wear and tear. Note that one and the same attribute can be classified as “Pre-fail” by one manufacturer or for one model and as “Old_age” by another or for another model. This is the case for example for attribute Seek_Error_Rate (ID 7), which is a widespread phenomenon on many disks and not considered critical by some manufacturers, but Seagate has it as “Pre-fail”.
- Raw value: The current raw value that was converted to the normalized value above. smartctl shows all as decimal values, but some attribute values of some manufacturers cannot be reasonably interpreted that way
-
New: Reacting to SMART Values.
It is said that a drive that starts getting bad sectors (attribute ID 5) or “pending” bad sectors (attribute ID 197; they most likely are bad, too) will usually be trash in 6 months or less. The only exception would be if this does not happen: that is, bad sector count increases, but then stays stable for a long time, like a year or more. For that reason, one normally needs a diagramming / journaling tool for SMART. Many admins will exchange the hard drive if it gets reallocated sectors (ID 5) or sectors “under investigation” (ID 197)
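Something as simple as the following could serve as a poor man's journaling tool until proper monitoring is in place; the paths, device and schedule are assumptions, and it would be run daily from a root cron job:
```bash
# Append today's raw values of attributes 5 (Reallocated_Sector_Ct) and
# 197 (Current_Pending_Sector) to a CSV so trends over time are visible.
DISK=/dev/sda
LOG=/var/log/smart_trend.csv
printf '%s,%s,%s\n' \
  "$(date -I)" \
  "$(smartctl -A "$DISK" | awk '$1 == 5 {print $10}')" \
  "$(smartctl -A "$DISK" | awk '$1 == 197 {print $10}')" >> "$LOG"
```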
Of all the attributes, I'm only going to analyse the critical ones.
Read Error Rate
ID: 01 (0x01) Ideal: Low Correlation with probability of failure: Not clear
(Vendor specific raw value.) Stores data related to the rate of hardware read errors that occurred when reading data from a disk surface. The raw value has different structure for different vendors and is often not meaningful as a decimal number. For some drives, this number may increase during normal operation without necessarily signifying errors.
Reallocated Sectors Count
ID: 05 (0x05) Ideal: Low Correlation with probability of failure: Strong
Count of reallocated sectors. The raw value represents a count of the bad sectors that have been found and remapped. Thus, the higher the attribute value, the more sectors the drive has had to reallocate. This value is primarily used as a metric of the life expectancy of the drive; a drive which has had any reallocations at all is significantly more likely to fail in the following months. If the Raw value of the 0x05 attribute is higher than its Threshold value, that will be reported as a "drive warning".
Spin Retry Count
ID: 10 (0x0A) Ideal: Low Correlation with probability of failure: Strong
Count of retry of spin start attempts. This attribute stores a total count of the spin start attempts to reach the fully operational speed (under the condition that the first attempt was unsuccessful). An increase of this attribute value is a sign of problems in the hard disk mechanical subsystem.
Current Pending Sector Count
ID: 197 (0xC5) Ideal: Low Correlation with probability of failure: Strong
Count of "unstable" sectors (waiting to be remapped, because of unrecoverable read errors). If an unstable sector is subsequently read successfully, the sector is remapped and this value is decreased. Read errors on a sector will not remap the sector immediately (since the correct value cannot be read and so the value to remap is not known, and also it might become readable later); instead, the drive firmware remembers that the sector needs to be remapped, and will remap it the next time it has been successfully read.[76]
However, some drives will not immediately remap such sectors when successfully read; instead the drive will first attempt to write to the problem sector, and if the write operation is successful the sector will then be marked as good (in this case, the "Reallocation Event Count" (0xC4) will not be increased). This is a serious shortcoming, for if such a drive contains marginal sectors that consistently fail only after some time has passed following a successful write operation, then the drive will never remap these problem sectors. If the Raw value of the 0xC5 attribute is higher than its Threshold value, that will be reported as a "drive warning".
(Offline) Uncorrectable Sector Count
ID: 198 (0xC6) Ideal: Low Correlation with probability of failure: Strong
The total count of uncorrectable errors when reading/writing a sector. A rise in the value of this attribute indicates defects of the disk surface and/or problems in the mechanical subsystem.
In the 60 days following the first uncorrectable error on a drive (S.M.A.R.T. attribute 0xC6 or 198) detected as a result of an offline scan, the drive was, on average, 39 times more likely to fail than a similar drive for which no such error occurred.
Non critical SMART attributes
The following attributes may change in the logs, but that doesn't necessarily mean anything is going wrong.
Hardware ECC Recovered
ID: 195 (0xC3) Ideal: Varies Correlation with probability of failure: Low
(Vendor-specific raw value.) The raw value has different structure for different vendors and is often not meaningful as a decimal number. For some drives, this number may increase during normal operation without necessarily signifying errors.
-
New: Monitorization.
To monitor your drive health you can use prometheus with alertmanager for alerts and grafana for dashboards.
Installing the exporter
The prometheus community has its own smartctl exporter.
Using the binary
You can download the latest binary from the repository releases and configure the systemd service
```bash
unp smartctl_exporter-0.13.0.linux-amd64.tar.gz
sudo mv smartctl_exporter-0.13.0.linux-amd64/smartctl_exporter /usr/bin
```
Add the service to `/etc/systemd/system/smartctl-exporter.service`:
```ini
[Unit]
Description=smartctl exporter service
After=network-online.target

[Service]
Type=simple
PIDFile=/run/smartctl_exporter.pid
ExecStart=/usr/bin/smartctl_exporter
User=root
Group=root
SyslogIdentifier=smartctl_exporter
Restart=on-failure
RemainAfterExit=no
RestartSec=100ms
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```
Then enable it:
```bash
sudo systemctl enable smartctl-exporter
sudo service smartctl-exporter start
```
If you prefer running it with docker-compose, something like:
```yaml
---
services:
  smartctl-exporter:
    container_name: smartctl-exporter
    image: prometheuscommunity/smartctl-exporter
    privileged: true
    user: root
    ports:
      - "9633:9633"
```
Configuring prometheus
Add the next scraping metrics:
```yaml
- job_name: smartctl_exporter
  metrics_path: /metrics
  scrape_timeout: 60s
  static_configs:
    - targets: [smartctl-exporter:9633]
      labels:
        hostname: "your-hostname"
```
Configuring the alerts
Taking as a reference the awesome prometheus rules and this wired post I'm using the next rules:
--- groups: - name: smartctl exporter rules: - alert: SmartDeviceTemperatureWarning expr: smartctl_device_temperature > 60 for: 2m labels: severity: warning annotations: summary: Smart device temperature warning (instance {{ $labels.hostname }}) description: "Device temperature warning (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: SmartDeviceTemperatureCritical expr: smartctl_device_temperature > 80 for: 2m labels: severity: critical annotations: summary: Smart device temperature critical (instance {{ $labels.hostname }}) description: "Device temperature critical (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: SmartCriticalWarning expr: smartctl_device_critical_warning > 0 for: 15m labels: severity: critical annotations: summary: Smart critical warning (instance {{ $labels.hostname }}) description: "device has critical warning (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: SmartNvmeWearoutIndicator expr: smartctl_device_available_spare{device=~"nvme.*"} < smartctl_device_available_spare_threshold{device=~"nvme.*"} for: 15m labels: severity: critical annotations: summary: Smart NVME Wearout Indicator (instance {{ $labels.hostname }}) description: "NVMe device is wearing out (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: SmartNvmeMediaError expr: smartctl_device_media_errors > 0 for: 15m labels: severity: warning annotations: summary: Smart NVME Media errors (instance {{ $labels.hostname }}) description: "Contains the number of occurrences where the controller detected an unrecovered data integrity error. Errors such as uncorrectable ECC, CRC checksum failure, or LBA tag mismatch are included in this field (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: SmartSmartStatusError expr: smartctl_device_smart_status < 1 for: 15m labels: severity: critical annotations: summary: Smart general status error (instance {{ $labels.hostname }}) description: " (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: DiskReallocatedSectorsIncreased expr: smartctl_device_attribute{attribute_id="5", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="5", attribute_value_type="raw"}[1h]) labels: severity: warning annotations: summary: "SMART Attribute Reallocated Sectors Count Increased" description: "The SMART attribute 5 (Reallocated Sectors Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: DiskSpinRetryCountIncreased expr: smartctl_device_attribute{attribute_id="10", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="10", attribute_value_type="raw"}[1h]) labels: severity: warning annotations: summary: "SMART Attribute Spin Retry Count Increased" description: "The SMART attribute 10 (Spin Retry Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: DiskCurrentPendingSectorCountIncreased expr: smartctl_device_attribute{attribute_id="197", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="197", attribute_value_type="raw"}[1h]) labels: severity: warning annotations: summary: "SMART Attribute Current Pending Sector Count Increased" description: "The SMART attribute 197 (Current Pending Sector 
Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: DiskUncorrectableSectorCountIncreased expr: smartctl_device_attribute{attribute_id="198", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="198", attribute_value_type="raw"}[1h]) labels: severity: warning annotations: summary: "SMART Attribute Uncorrectable Sector Count Increased" description: "The SMART attribute 198 (Uncorrectable Sector Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"Configuring the grafana dashboards
Of the different grafana dashboards (1, 2, 3) I went for the first one.
Import it with the UI of grafana, make it work and then export the json to store it in your infra as code repository.
References
-
New: Thoughts on adding new disks to ZFS.
When it comes to expanding an existing ZFS storage system, careful consideration is crucial. In my case, I faced a decision point with my storage cluster: after two years of reliable service from my 8TB drives, I needed more capacity. This led me to investigate the best way to integrate newly acquired refurbished 12TB drives into the system. Here's my journey through this decision-making process and the insights gained along the way.
The Starting Point
My existing setup consisted of 8TB drives purchased new, which had been running smoothly for two years. The need for expansion led me to consider refurbished 12TB drives as a cost-effective solution. However, mixing new and refurbished drives, especially of different capacities, raised several important considerations that needed careful analysis.
Initial Drive Assessment
The first step was to evaluate the reliability of all drives. Using
`smartctl`, I analyzed the SMART data across both the existing and new drives:
```bash
for disk in a b c d e f g h i; do
  echo "/dev/sd$disk: old $(smartctl -a /dev/sd$disk | grep Old | wc -l) pre-fail: $(smartctl -a /dev/sd$disk | grep Pre- | wc -l)"
done
```
The results showed similar values across all drives, with "Old_Age" attributes ranging from 14-17 and "Pre-fail" attributes between 3-6. While this indicated all drives were aging, they were still functioning within acceptable parameters. However, raw SMART data doesn't tell the whole story, especially when comparing new versus refurbished drives.
Drive Reliability Considerations
After careful evaluation, I found myself trusting the existing 8TB drives more than the newer refurbished 12TB ones. This conclusion was based on several factors:
- The 8TB drives had a proven track record in my specific environment
- Their smaller size meant faster resilver times, reducing the window of vulnerability during recovery
- One of the refurbished 12TB drives was already showing concerning symptoms (8 reallocated sectors, although a badblocks didn't increase that number), which reduced confidence in the entire batch
- The existing drives were purchased new, while the 12TB drives were refurbished, adding an extra layer of uncertainty
Layout Options Analysis
When expanding a ZFS system, there's always the temptation to simply add more vdevs to the existing pool. However, I investigated two main approaches:
- Creating a new separate ZFS pool with the new disks
- Adding another vdev to the existing pool (see the sketch after this list)
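As a rough sketch, and with hypothetical pool and disk names, the two options boil down to:
```bash
# Option 1: a separate pool made of the new disks
zpool create tank1 raidz1 /dev/disk/by-id/new-disk-1 /dev/disk/by-id/new-disk-2 \
  /dev/disk/by-id/new-disk-3 /dev/disk/by-id/new-disk-4

# Option 2: another raidz1 vdev appended to the existing pool
zpool add tank0 raidz1 /dev/disk/by-id/new-disk-1 /dev/disk/by-id/new-disk-2 \
  /dev/disk/by-id/new-disk-3 /dev/disk/by-id/new-disk-4
```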
Resilver time
Adding the 12TB drives to the pool and redistributing the data across all 8 drives will help reduce the resilver time. Here's a detailed breakdown:
- Current Situation
    - 4x 8TB drives at 95% capacity means each drive is heavily packed
    - High data density means longer resilver times
    - Limited free space for data movement and reconstruction
- After Adding 12TB Drives
    - Total pool capacity increases significantly
    - ZFS will automatically start rebalancing data across all 8 drives
    - This process (sometimes called "data shuffling" or "data redistribution") has several benefits:
        - Reduces data density per drive
        - Creates more free space
        - Improves overall pool performance
        - Potentially reduces future resilver times
- Resilver Time Reduction Mechanism
    - With data spread across more drives, each individual drive has less data to resilver
    - Less data per drive = faster resilver process
    - The redistribution happens gradually and in the background
Understanding Failure Scenarios
The key differentiator between these approaches came down to failure scenarios:
Single Drive Failure
Both configurations handle single drive failures similarly, though the 12TB drives' longer resilver time creates a longer window of vulnerability in the two-vdev configuration if the data load is evenly shared between the disks. This is particularly concerning with refurbished drives, where the failure probability might be higher.
However, if you rebalance the data inside ZFS as soon as you add the new vdev to the pool, the 8TB drives will be less full, so until more data is added their resilver time may be shorter because they hold less data.
Double Drive Failure
This is where the configurations differ significantly:
- In a two-vdev pool, losing two drives from the same vdev would cause complete pool failure
- With separate pools, a double drive failure would only affect one pool, allowing the other to continue operating. This way you can store the critical data on the pool you trust more.
- Given the mixed drive origins (new vs refurbished), isolating potential failures becomes more critical
Performance Considerations
While investigating performance implications, I found several interesting points about IOPS and throughput:
- ZFS stripes data across vdevs, meaning more vdevs generally means better IOPS
- In RAIDZ configurations, IOPS are limited by the slowest drive in the vdev
- Multiple mirrored vdevs provide the best combined IOPS performance
- Streaming speeds scale with the number of data disks in a RAIDZ vdev
- When mixing drive sizes, ZFS tends to favor larger vdevs, which could lead to uneven wear
Ease of configuration
Cache and log
If you already have a zpool with a cache and logs in nvme, then if you were to use two pools, you'd need to reformat your nvme drives to create space for the new partitions needed for the new zpool.
This would allow you to specify different cache sizes for each pool. But it comes at the cost of a more complex operation.
New pool creation
Adding a vdev to an existing pool is quicker and easier than creating a new zpool. You need to make sure that you initialise it with the correct configuration.
Storage management
Having two pools doubles the operation tasks. One of the pools will fill up soon, so you may need to manually move files and directories around to rebalance it.
Final Decision
After weighing all factors, if you favour reliability over convenience, implement two separate ZFS pools. This recommendation is primarily driven by:
- Enhanced Reliability: By separating the pools, we can maintain service availability even if one pool fails completely
- Data Prioritization: This allows placing critical application data on the more reliable pool (8TB drives), while using the refurbished drives for less critical data like media files
- Risk Isolation: Keeping the proven, new-purchased drives separate from the refurbished ones minimizes the impact of potential issues with the refurbished drives
- Consistent Performance: Following the best practice of keeping same-sized drives together in pools
However, I'm currently favouring convenience and trusting my backup solution (I hope not to read this line in the future with regret :P), so I'll go with two vdevs.
Key Takeaways
Through this investigation, I learned several important lessons about ZFS storage design:
- Raw parity drive count isn't the only reliability metric - configuration matters more than simple redundancy numbers
- Pool layout significantly impacts both performance and failure scenarios
- Sometimes simpler configurations (like separate pools) can provide better overall reliability than more complex ones
- Consider the full lifecycle of the storage, including maintenance operations like resilver times
- When expanding storage, don't underestimate the value of isolating different generations or sources of hardware
- The history and source of drives (new vs refurbished) should influence your pool design decisions
This investigation reinforced that storage design isn't just about maximizing space or performance - it's about finding the right balance of reliability, performance, and manageability for your specific needs. When dealing with mixed drive sources and different capacities, this balance becomes even more critical.
References and further reading
-
New: Add link to the backblaze disk reports.
If you have some time take a look at the backblaze disk reports; they do a quarterly analysis of their infra (around 300k disks).
-
New: Remove the public IP of an ec2 instance.
- Navigate to the network interfaces of the instance
- Click on the one that contains the public IP
- Actions/Manage IP addresses
- Click on the Interface to unfold the configuration
- Click on Auto-assign public IP
-
New: Introduce vector.
Vector is a lightweight, ultra-fast tool for building observability pipelines
Installation
First, add the Vector repo:
bash -c "$(curl -L https://setup.vector.dev)"Then you can install the vector package:
sudo apt-get install vectorTweak the configuration and then enable the service.
To be sure that vector is able to push to loki, create the `/usr/local/bin/wait-for-loki.sh` file:
```bash
#!/bin/bash
while true; do
  response=$(curl -s http://localhost:3100/ready 2>/dev/null)
  if [ "$response" = "ready" ]; then
    break
  fi
  sleep 1
done
```
Make it executable:
```bash
chmod +x /usr/local/bin/wait-for-loki.sh
```
Then update your `vector.service` (`/usr/lib/systemd/system/vector.service`):
```ini
ExecStartPre=/usr/local/bin/wait-for-loki.sh
ExecStartPre=/usr/bin/vector validate
```
Run `systemctl daemon-reload` to reload the service configuration.

The config lives at `/etc/vector/vector.yaml`.

First add `vector` to the `docker` group:
```bash
usermod -a -G docker vector
```
```yaml
sources:
  docker:
    type: docker_logs

transforms:
  docker_labels:
    type: remap
    inputs:
      - docker
    source: |
      .service_name = get(.label, ["com.docker.compose.project"]) ?? "unknown"

sinks:
  loki_docker:
    type: loki
    inputs:
      - docker_labels
    endpoint: http://localhost:3100/
    encoding:
      codec: json
    labels:
      source: docker
      host: "{{ host }}"
      container: "{{ container_name }}"
      service_name: "{{ service_name }}"
```
To avoid the services that run under docker being indexed twice:
```yaml
sources:
  journald:
    type: journald

transforms:
  journald_filter:
    type: filter
    inputs:
      - journald
    condition: |
      # Exclude docker-compose systemd services
      !contains(string!(.SYSLOG_IDENTIFIER), "docker-compose") && !contains(string!(.SYSLOG_IDENTIFIER), "docker")

  journald_labels:
    type: remap
    inputs:
      - journald_filter
    source: |
      .service_name = ._SYSTEMD_UNIT || "unknown"

sinks:
  loki_systemd:
    type: loki
    inputs:
      - journald_labels
    endpoint: http://localhost:3100/
    encoding:
      codec: json
    labels:
      source: journald
      host: "{{ host }}"
      service_name: "{{ service_name }}"
```
ZFS
Prepare the file to be readable by vector:
```bash
chown root:vector /proc/spl/kstat/zfs/dbgmsg
chmod 640 /proc/spl/kstat/zfs/dbgmsg
```
```yaml
sources:
  zfs_log:
    type: file
    include:
      - /proc/spl/kstat/zfs/dbgmsg

sinks:
  zfs_files:
    type: loki
    inputs:
      - zfs_log
    endpoint: http://localhost:3100/
    encoding:
      codec: json
    labels:
      source: file
      service_name: zfs
      host: "{{ host }}"
      filename: "{{ file }}"
```
Troubleshooting

Unable to open checkpoint file. path="/var/lib/vector/journald/checkpoint.txt"
```
ERROR source{component_kind="source" component_id=journald component_type=journald}: vector::internal_events::journald: Unable to open checkpoint file. path="/var/lib/vector/journald/checkpoint.txt" error=Permission denied (os error 13) error_type="io_failed" stage="receiving"
```
```bash
sudo mkdir -p /var/lib/vector/journald
sudo chown -R vector:vector /var/lib/vector
sudo chmod 755 /var/lib/vector
sudo chmod 755 /var/lib/vector/journald
```
References
-
New: Vector Permission Debugging with systemd tmpfiles.
Vector fails to read log files after reboots or log rotation with permission errors:
```
ERROR: Failed reading file for fingerprinting. error=Permission denied (os error 13)
```
Solution with tmpfiles

Create `/etc/tmpfiles.d/vector-permissions.conf`:
```
z /data/apps/myapp/logs/logfile.log 0644 vector vector -
z /path/to/another/logfile.log 0644 vector vector -
```
Apply immediately:
systemd-tmpfiles --create /etc/tmpfiles.d/vector-permissions.confWhat is tmpfiles
systemd tmpfiles.d is a mechanism for managing temporary files, directories, and their permissions at boot time and periodically during system operation.
What it does:
- Creates/removes files and directories
- Sets ownership and permissions
- Runs at boot via systemd-tmpfiles-setup.service
- Can be triggered manually or periodically
Configuration format:
```
Type Path Mode UID GID Age Argument
```
Common types:
- `d` - Create directory
- `f` - Create file
- `z` - Set ownership/permissions on existing path
- `Z` - Recursively set ownership/permissions
- `x` - Ignore/exclude path
Example:
```
d /var/run/myapp 0755 myuser mygroup -
f /var/log/myapp.log 0644 myuser mygroup -
z /existing/file 0644 myuser mygroup -
```
Files in /etc/tmpfiles.d/ with a .conf extension are processed automatically at boot and can be manually applied with `systemd-tmpfiles --create`. Also, if the `systemd-tmpfiles-clean.timer` is enabled (which it is by default), it runs daily.
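You can check that timer directly; `systemctl list-timers` is standard systemd, so something like:
```bash
# Show when the tmpfiles cleanup timer last ran and when it fires next.
systemctl list-timers systemd-tmpfiles-clean.timer
```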
OpenZFS⚑
-
New: Check the health of a disk with badblocks.
The
`badblocks` command will write and read the disk with different patterns, thus overwriting the whole disk, so you will lose all the data on the disk.

This test is good for rotational disks, as there is no disk degradation on massive writes; do not use it on SSDs though.
WARNING: be sure that you specify the correct disk!!
```bash
badblocks -wsv -b 4096 /dev/sde | tee disk_analysis_log.txt
```
If errors are shown, it means that all of the spare sectors of the disk are already used, so you must not use this disk anymore. Again, check `dmesg` for traces of disk errors.
-
New: Get the node architecture of the pods of a deployment.
Here are a few ways to check the node architecture of pods in a deployment:
-
Get the nodes where the pods are running:
```bash
kubectl get pods -l app=your-deployment-label -o wide
```
This will show which nodes are running your pods.
-
Then check the architecture of those nodes:
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
Or you can combine this into a single command:
```bash
kubectl get pods -l app=your-deployment-label -o json | jq -r '.items[].spec.nodeName' | xargs -I {} kubectl get node {} -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
```
You can also check if your deployment is explicitly targeting specific architectures through node selectors or affinity rules:
```bash
kubectl get deployment your-deployment-name -o yaml | grep -A 5 nodeSelector
```
-
-
New: Removing a disk from the pool.
```bash
zpool remove tank0 sda
```
This will trigger the data evacuation from the disk. Check `zpool status` to see when it finishes.
-
New: Encrypting ZFS Drives with LUKS.
Warning: Proceed with Extreme Caution
IMPORTANT SAFETY NOTICE:
- These instructions will COMPLETELY WIPE the target drive
- Do NOT attempt on production servers
- Experiment only on drives with no valuable data
- Seek professional help if anything is unclear
Prerequisites
- A drive you want to encrypt (will be referred to as `/dev/sdx`)
- Root access
- Basic understanding of Linux command line
- Backup of all important data
Step 1: Create LUKS Encryption Layer
First, format the drive with LUKS encryption:
```bash
sudo cryptsetup luksFormat /dev/sdx
```
- You'll be prompted for a sudo password
- Create a strong encryption password (mix of uppercase, lowercase, numbers, symbols)
- Note the precise capitalization in commands
Step 2: Open the Encrypted Disk
Open the newly encrypted disk:
```bash
sudo cryptsetup luksOpen /dev/sdx sdx_crypt
```
This creates a mapped device at `/dev/mapper/sdx_crypt`.

Step 3: Create the ZFS Pool or vdev
For example to create a ZFS pool on the encrypted device:
```bash
sudo zpool create -f -o ashift=12 \
  -O compression=lz4 \
  zpool /dev/mapper/sdx_crypt
```
Check the create zpool section to know which configuration flags to use.
Step 4: Set Up Automatic Unlocking
Generate a Keyfile
Create a random binary keyfile:
```bash
sudo dd bs=1024 count=4 if=/dev/urandom of=/etc/zfs/keys/sdx.key
sudo chmod 0400 /etc/zfs/keys/sdx.key
```
Add Keyfile to LUKS
Add the keyfile to the LUKS disk:
```bash
sudo cryptsetup luksAddKey /dev/sdx /etc/zfs/keys/sdx.key
```
- You'll be asked to enter the original encryption password
- This adds the binary file to the LUKS disk header
- Now you can unlock the drive using either the password or the keyfile
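If you want to double check, `cryptsetup luksDump` (a standard cryptsetup command, not part of the original steps) lists the populated key slots in the LUKS header:
```bash
# After luksAddKey there should be a second occupied key slot.
sudo cryptsetup luksDump /dev/sdx
```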
Step 5: Configure Automatic Mounting
Find Drive UUID
Get the drive's UUID:
```bash
sudo blkid
```
Look for the line with `TYPE="crypto_LUKS"`. Copy the UUID.

Update Crypttab
Edit the crypttab file:
```bash
sudo vim /etc/crypttab
```
Add an entry like:
```
sdx_crypt UUID=your-uuid-here /etc/zfs/keys/sdx.key luks,discard
```
Final Step: Reboot
- Reboot your system
- The drive will be automatically decrypted and imported
Best Practices
- Keep your keyfile and encryption password secure
- Store keyfiles with restricted permissions
- Consider backing up the LUKS header
Troubleshooting
- Double-check UUIDs
- Verify keyfile permissions
- Ensure cryptsetup and ZFS are installed
Security Notes
- This method provides full-disk encryption at rest
- Data is inaccessible without the key or password
- Protects against physical drive theft
Disclaimer
While these instructions are comprehensive, they come with inherent risks. Always:
- Have backups
- Test in non-critical environments first
- Understand each step before executing
Further reading
-
New: Add a disk to an existing vdev.
```bash
zpool add tank /dev/sdx
```
-
New: Add a vdev to an existing pool.
```bash
zpool add main raidz1 /dev/disk-1 /dev/disk-2 /dev/disk-3 /dev/disk-4
```
You don't need to specify the `ashift` or the `autoexpand` as they are set on zpool creation.
-
New: Add zfs book.
-
Correction: Replacing a disk in the pool.
If the pool is not DEGRADED
If you want to do operations on your pool and want to prevent it from being DEGRADED you need to attach a new disk to the server and use the replace command
```bash
zpool replace tank0 ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx /dev/sdX
```
Where `/dev/sdX` is your temporary disk. Once the original disk is removed from the vdev you can do the operations you need.
-
New: Removing a disk from the pool.
```bash
zpool remove tank0 sda
```
This will trigger the data evacuation from the disk. Check `zpool status` to see when it finishes.

Sometimes zfs won't allow you to remove a disk if doing so would put the pool at risk. In that case try to replace a disk in the pool as explained above.
-
New: List all datasets that have zfs native encryption.
```bash
ROOT_FS="main"

is_encryption_enabled() {
  zfs get -H -o value encryption $1 | grep -q 'aes'
}

list_datasets_with_encryption() {
  # Initialize an array to hold dataset names
  datasets=()

  # List and iterate over all datasets starting from the root filesystem
  for dataset in $(zfs list -H -o name | grep -E '^'$ROOT_FS'/'); do
    if is_encryption_enabled "$dataset"; then
      datasets+=("$dataset")
    fi
  done

  # Output the results
  echo "ZFS datasets with encryption enabled:"
  printf '%s\n' "${datasets[@]}"
}

list_datasets_with_encryption
```
-
New: Troubleshoot cannot destroy dataset: dataset is busy.
If you're experiencing this error and can reproduce the next traces:
```
cannot destroy 'zroot/2013-10-15T065955229209': dataset is busy
cannot unmount 'zroot/2013-10-15T065955229209': not currently mounted
zroot/2013-10-15T065955229209  2.86G  25.0G  11.0G  /var/lib/heaver/instances/2013-10-15T065955229209
umount: /var/lib/heaver/instances/2013-10-15T065955229209: not mounted
```
You can `grep zroot/2013-10-15T065955229209 /proc/*/mounts` to see which process is still using the dataset.

Another possible culprit are snapshots; you can then run:
```bash
zfs holds $snapshotname
```
To see if it has any holds, and if so, `zfs release` to remove the hold.
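For example, assuming the dataset from the traces above and whatever tag `zfs holds` reports (the `keep` tag here is made up):
```bash
# List holds on the snapshot, then release the reported tag.
zfs holds zroot/2013-10-15T065955229209@snapshot
zfs release keep zroot/2013-10-15T065955229209@snapshot
```
-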
New: Upgrading ZFS Storage Pools.
If you have ZFS storage pools from a previous zfs release you can upgrade your pools with the
`zpool upgrade` command to take advantage of the pool features in the current release. In addition, the zpool status command has been modified to notify you when your pools are running older versions. For example:
```
zpool status
  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
errors: No known data errors
```
You can use the following syntax to identify additional information about a particular version and supported releases:
```
zpool upgrade -v
This system is currently running ZFS pool version 22.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Reserved
 22  Received properties

For more information on a particular version, including supported releases, see the ZFS Administration Guide.
```
Then, you can run the zpool upgrade command to upgrade all of your pools. For example:
```bash
zpool upgrade -a
```
-
New: Monitor the zfs ram usage.
```yaml
- alert: HostOutOfMemory
  # if we don't add the node_zfs_arc_size, the ARC is taken as used space triggering the alert as a false positive
  expr: (node_memory_MemAvailable_bytes + node_zfs_arc_size) / node_memory_MemTotal_bytes * 100 < 10
  for: 5m
  labels:
    severity: warning
  annotations:
    summary: Host out of memory (instance {{ $labels.instance }})
    message: "Node memory is filling up (< 10% left)\n  VALUE = {{ $value }}"
```
-
New: Unattended upgrades.
unattended-upgrades runs daily at a random time
How to tell when unattended upgrades will run today:
The random time is set by a cron job (/etc/cron.daily/apt.compat), and you can read the random time for today by asking systemd:
```
$ systemctl list-timers apt-daily.timer
NEXT                         LEFT     LAST                         PASSED       UNIT            ACTIVATES
Tue 2017-07-11 01:53:29 CDT  13h left Mon 2017-07-10 11:22:40 CDT  1h 9min ago  apt-daily.timer apt-daily.service
```
In this case, you can see that it ran 1 hour and 9 minutes ago.
How to tell if unattended upgrades are still running:
One easy way is to check the timestamp files for the various apt components:
```
$ ls -l /var/lib/apt/periodic/
total 0
-rw-r--r-- 1 root root 0 Jul 10 11:24 unattended-upgrades-stamp
-rw-r--r-- 1 root root 0 Jul 10 11:23 update-stamp
-rw-r--r-- 1 root root 0 Jul 10 11:24 update-success-stamp
-rw-r--r-- 1 root root 0 Jul 10 11:24 upgrade-stamp
```
Putting the data together, you can see that the timer started apt at 11:22. It ran an update which completed at 11:23, then an upgrade which completed at 11:24. Finally, you can see that apt considered the upgrade to be a success (no error or other failure).
Obviously, if you see a recent timer without a corresponding completion timestamp, then you might want to check ps to see if apt is still running.
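For example, a quick check from the shell (the grep pattern is just an example) shows whether apt or unattended-upgrades is still running:
```bash
# The bracketed first letter avoids the grep matching itself.
ps aux | grep -E '[a]pt|[u]nattended-upgrade'
```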
How to tell which step apt is running right now
One easy way is to check the logfile.
```
$ less /var/log/unattended-upgrades/unattended-upgrades.log
2017-07-10 11:23:00,348 INFO Initial blacklisted packages:
2017-07-10 11:23:00,349 INFO Initial whitelisted packages:
2017-07-10 11:23:00,349 INFO Starting unattended upgrades script
2017-07-10 11:23:00,349 INFO Allowed origins are: ['o=Ubuntu,a=zesty-security', 'o=Ubuntu,a=zesty-updates']
2017-07-10 11:23:10,485 INFO Packages that will be upgraded: apport apport-gtk libpoppler-glib8 libpoppler-qt5-1 libpoppler64 poppler-utils python3-apport python3-problem-report
2017-07-10 11:23:10,485 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg.log'
2017-07-10 11:24:20,419 INFO All upgrades installed
```
Here you can see the normal daily process, including the 'started' and 'completed' lines, and the list of packages that were about to be upgraded.
If the list of packages is not logged yet, then apt can be safely interrupted. Once the list of packages is logged, DO NOT interrupt apt.
**Check the number of packages that need an upgrade**
```bash
apt list --upgradeable
```
**Manually run the unattended upgrades**
```bash
unattended-upgrade -d
```
unison⚑
-
New: Installation.
If you are on debian or ubuntu, the version of the repositories does not allow you to run the program with the file watcher, so you may need to build it yourself:
First install the dependencies:
```bash
sudo apt-get install ocaml-native-compilers
```
Then download and build unison:
```bash
export UNISON_VERSION=2.53.8

echo "Install Unison." \
  && pushd /tmp \
  && wget https://github.com/bcpierce00/unison/archive/v$UNISON_VERSION.tar.gz \
  && tar -xzvf v$UNISON_VERSION.tar.gz \
  && rm v$UNISON_VERSION.tar.gz \
  && pushd unison-$UNISON_VERSION \
  && make \
  && cp -t /usr/local/bin ./src/unison ./src/unison-fsmonitor \
  && popd \
  && rm -rf unison-$UNISON_VERSION \
  && popd
```
sudo apt-get remove ocaml-native-compilers -
New: Run in the background watching changes.
Create the systemd service in:
`~/.config/systemd/user/unison.service` (assuming that your profile is orgfiles):
```ini
[Unit]
Description=unison

[Service]
ExecStart=/usr/local/bin/unison orgfiles
Restart=on-failure
RestartSec=3

[Install]
WantedBy=default.target
```
renovate⚑
-
New: Add mention to sixos.
Think about using sixos instead, a nixos without systemd
-
New: Installation in gitea actions.
- Create Renovate Bot Account and generate a token for the Gitea Action secret
- Add the renovate bot account as collaborator with write permissions to the repository you want to update.
- Create a repository to store our Renovate bot configurations, assuming called renovate-config.
In renovate-config, create a file config.js to configure Renovate:
```js
module.exports = {
  "endpoint": "https://gitea.com/api/v1", // replace it with your actual endpoint
  "gitAuthor": "Renovate Bot <renovate-bot@yourhost.com>",
  "platform": "gitea",
  "onboardingConfigFileName": "renovate.json",
  "autodiscover": true,
  "optimizeForDisabled": true,
};
```
If you're using mysql or you see errors like `.../repository/pulls 500 internal error` you may need to set `unicodeEmoji: false`.
New: Configure renovate.
By default, Renovate raises PRs but leaves them to someone or something else to merge them. By configuring this setting, you allow Renovate to automerge PRs or even branches. Using automerge reduces the amount of human intervention required.
Usually you won't want to automerge all PRs, for example most people would want to leave major dependency updates to a human to review first. You could configure Renovate to automerge all but major this way:
{ "packageRules": [ { "matchUpdateTypes": ["minor", "patch", "pin", "digest"], "automerge": true } ] }Also note that this option can be combined with other nested settings, such as dependency type. So for example you could choose to automerge all (passing) devDependencies only this way:
{ "packageRules": [ { "matchDepTypes": ["devDependencies"], "automerge": true } ] }Configure docker version extraction
- Ansible Manager Docker-type dependency extraction · renovatebot/renovate · Discussion #18190 · GitHub
- Automated Dependency Updates for Ansible - Renovate Docs
- Pin packages in ansible roles · Issue #3720 · renovatebot/renovate
- Support default environment variable values in docker-compose · Issue #4635 · renovatebot/renovate
- Support docker-compose.yml versions from .env files · Issue #31685 · renovatebot/renovate
Combine all updates to one branch/PR
-
New: Add upgrade notes.
Debugging⚑
-
New: PVC or PV is stuck deleting.
When PVs and PVCs get stuck during deletion, it's usually due to finalizers that prevent the cleanup process from completing. Here are several approaches to resolve this:
Check for Finalizers
First, examine what's preventing the deletion:
```bash
kubectl get pv <pv-name> -o yaml | grep finalizers -A 5
kubectl get pvc <pvc-name> -n <namespace> -o yaml | grep finalizers -A 5
```
If you see finalizers like
`kubernetes.io/pv-protection` or `kubernetes.io/pvc-protection`, you can remove them:
```bash
kubectl patch pvc <pvc-name> -n <namespace> -p '{"metadata":{"finalizers":null}}'
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}'
```
Node Exporter⚑
-
New: Monitor host requires a reboot.
Node exporter does not support this metric, but you can monitor reboot requirements using Prometheus node exporter's textfile collector. Here's how to set it up:
Create the monitoring script
First, create a script that checks for the reboot-required file and outputs metrics in Prometheus format:
```bash
#!/bin/bash
TEXTFILE_DIR="/var/lib/node_exporter/textfile_collector"
METRIC_FILE="$TEXTFILE_DIR/reboot_required.prom"

mkdir -p "$TEXTFILE_DIR"

if [ -f /var/run/reboot-required ]; then
  REBOOT_REQUIRED=1
else
  REBOOT_REQUIRED=0
fi

cat > "$METRIC_FILE" << EOF
node_reboot_required $REBOOT_REQUIRED
EOF

chmod 644 "$METRIC_FILE"
```
- Make the script executable and place it in the right location:
```bash
sudo cp reboot-required-check.sh /usr/local/bin/
sudo chmod +x /usr/local/bin/reboot-required-check.sh
```
- Ensure the textfile collector directory exists:
```bash
sudo mkdir -p /var/lib/node_exporter/textfile_collector
sudo chown node_exporter:node_exporter /var/lib/node_exporter/textfile_collector
```
- Create a systemd service to run the script periodically:
```ini
[Unit]
Description=Check if system requires reboot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/reboot-required-check.sh
User=node_exporter
Group=node_exporter

[Install]
WantedBy=multi-user.target
```
- Create a systemd timer to run it regularly:
```ini
[Unit]
Description=Check if system requires reboot every 5 minutes
Requires=reboot-check.service

[Timer]
OnBootSec=1min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
```
- Install and enable the systemd units:
```bash
sudo cp reboot-check.service /etc/systemd/system/
sudo cp reboot-check.timer /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable reboot-check.timer
sudo systemctl start reboot-check.timer
```
- Configure node exporter to use the textfile collector: make sure your node exporter is started with the `--collector.textfile.directory` flag:
node_exporter --collector.textfile.directory=/var/lib/node_exporter/textfile_collectorPrometheus alerting rule
You can create an alerting rule in Prometheus to notify when a reboot is required:
```yaml
groups:
  - name: system.rules
    rules:
      - alert: SystemRebootRequired
        expr: node_reboot_required == 1
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "System {{ $labels.instance }} requires reboot"
          description: "System {{ $labels.instance }} requires a reboot due to: {{ $labels.reason }}"
```
You can test the setup by:
- Run the script manually:
```bash
sudo /usr/local/bin/reboot-required-check.sh
cat /var/lib/node_exporter/textfile_collector/reboot_required.prom
```
- Check if the timer is working:
```bash
sudo systemctl status reboot-check.timer
sudo journalctl -u reboot-check.service
```
- Verify metrics are being collected: visit `http://your-server:9100/metrics` and search for `node_reboot_required`.
The metrics will show
`node_reboot_required 1` when a reboot is required and `node_reboot_required 0` when it's not. The `node_reboot_required_packages_info` metric includes information about which packages triggered the reboot requirement.
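You can also do that verification from the command line; the hostname below is a placeholder:
```bash
# Scrape the exporter once and filter for the reboot metric.
curl -s http://your-server:9100/metrics | grep node_reboot_required
```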
Operating Systems⚑
dunst⚑
-
Correction: Tweak installation steps.
```bash
sudo apt install libdbus-1-dev libx11-dev libxinerama-dev libxrandr-dev libxss-dev libglib2.0-dev \
  libpango1.0-dev libgtk-3-dev libxdg-basedir-dev libgdk-pixbuf-2.0-dev
make WAYLAND=0
sudo make WAYLAND=0 install
```
If it didn't create the systemd service you can create it yourself with this service file:
```ini
[Unit]
Description=Dunst notification daemon
Documentation=man:dunst(1)
PartOf=graphical-session.target

[Service]
Type=dbus
BusName=org.freedesktop.Notifications
ExecStart=/usr/local/bin/dunst
Slice=session.slice
Environment=PATH=%h/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

[Install]
WantedBy=default.target
```
You may need to add more paths to PATH.
To see the logs of the service use
```bash
journalctl --user -u dunst.service -f --since "15 minutes ago"
```
-
New: Configuration.
Read and tweak the
`~/.dunst/dunstrc` file to your liking. You have the default one here.

You'll also need to configure the actions in your window manager. In my case i3wm:
```
bindsym $mod+b exec dunstctl close-all
bindsym $mod+v exec dunstctl context
```
-
New: Configure each application notification.
You can look at rosoau config for inspiration
References
- Some dunst configs
- Smarttech101 tutorials (1, 2)
- Archwiki page on dunst
Linux Snippets⚑
-
New: Prevent the screen from turning off.
VESA Display Power Management Signaling (DPMS) enables power saving behaviour of monitors when the computer is not in use. The time of inactivity before the monitor enters into a given saving power level—standby, suspend or off—can be set as described in DPMSSetTimeouts(3).
It is possible to turn off your monitor with the xset command
```bash
xset s off -dpms
```
It will disable DPMS and prevent the screen from blanking.

To query the current settings:
```bash
xset q
```
If that doesn't work you can use the keep-presence program:
```bash
pip install keep-presence
keep-presence -c
```
That will move the cursor one pixel in circles every 300s; if you need to move it more often use the `-s` flag.
-
New: Protect the edition of a pdf with a password.
Use
pdftk. From its man page:Encrypt a PDF using 128-Bit Strength (the Default) and Withhold All Permissions (the Default)
```bash
$ pdftk [mydoc].pdf output [mydoc.128].pdf owner_pw [foopass]
```
Same as Above, Except a Password is Required to Open the PDF
```bash
$ pdftk [mydoc].pdf output [mydoc.128].pdf owner_pw [foo] user_pw [baz]
```
Same as Above, Except Printing is Allowed (after the PDF is Open)
```bash
$ pdftk [mydoc].pdf output [mydoc.128].pdf owner_pw [foo] user_pw [baz] allow printing
```
To check if it has set the password correctly you can run:
```bash
pdftk "input.pdf" dump_data output /dev/null dont_ask
```
-
New: Reduce the size of an image.
The simplest way of reducing the size of the image is by degrading the quality of the image.
```bash
convert <INPUT_FILE> -quality 50% <OUTPUT_FILE>
```
The main difference between the `convert` and `mogrify` commands is that `mogrify` applies the operations on the original image file, whereas `convert` does not.
```bash
mogrify -quality 50 *.jpg
```
New: Change the default shell of a user using the command line.
```bash
chsh -s /usr/bin/zsh lyz
```
-
New: Introduce simplex chat.
Simplex chat is the first messenger without user IDs
I went to a talk in the 38c3 (december 2024), and even though the project looked good there were some stuff that pushed me away:
- The cypher has not been tested
- It's not fully open sourced
-
New: Record the audio from your computer.
You can record audio being played in a browser using
`ffmpeg`:
- Check your default audio source:
```bash
pactl list sources | grep -E 'Name|Description'
```
- Record using `ffmpeg`:
```bash
ffmpeg -f pulse -i <your_monitor_source> output.wav
```
Example:
```bash
ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor output.wav
```
- Stop recording with Ctrl+C.
-
New: Download videos from rtve.es.
Use descargavideos.tv (source)
-
New: Check if a domain is in a list of known disposable email domains.
You can check in known lists
```bash
wget https://raw.githubusercontent.com/andreis/disposable-email-domains/master/domains.txt
grep -i homapin.com domains.txt
```
Or using web services that either use the IPs (obtained by whois/dig):
```
https://www.blocklist.de/en/search.html?ip=142.132.166.12&action=search&send=start+search 👍
https://www.blocklist.de/en/search.html?ip=188.166.111.252&action=search&send=start+search 👍
https://www.blocklist.de/en/search.html?ip=46.101.111.206&action=search&send=start+search 👍
https://www.blocklist.de/en/search.html?ip=116.202.9.167&action=search&send=start+search 👍
https://check.spamhaus.org/results/?query=homapin.com 👍
https://verifymail.io/domain/homapin.com 👎
https://www.ipqualityscore.com/domain-reputation/homapin.com 👎
https://quickemailverification.com/tools/disposable-email-address-detector for homapin.com 👎
```
-
TL;DR: The syntax is as follows for the mount command:
```bash
mount -t iso9660 -o ro /dev/deviceName /path/to/mount/point
```
Use the following command to find out the name of the DVD / CD-ROM / Writer / Blu-ray device on a Linux based system:
```bash
lsblk
```
Or use the combination of the dmesg command and grep/egrep as follows to print your CD/DVD device name. For example:
```bash
dmesg | grep -E -i --color 'cdrom|dvd|cd/rw|writer'
```
Sample output indicating that /dev/sr0 is my device name:
```
[    5.437164] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray
[    5.437307] cdrom: Uniform CD-ROM driver Revision: 3.20
```
Create a mount point with the mkdir command as follows:
```bash
mkdir -p /mnt/cdrom
```
Mount the /dev/cdrom or /dev/sr0 as follows:
```bash
mount -t iso9660 -o ro /dev/cdrom /mnt/cdrom
```
-
New: Use lftp.
Connect with:
```bash
lftp -p <port> user@host
```
Navigate with `ls` and `cd`. Get files with `mget` for multiple things.
-
New: Difference between apt-get upgrate and apt-get full-upgrade.
The difference between
`upgrade` and `full-upgrade` is that the latter will remove installed packages if that is needed to upgrade the whole system. Be extra careful when using this command.

I will more frequently use `autoremove` to remove old packages and then just use `upgrade`.
-
New: Upgrade debian.
```bash
sudo apt-get update
sudo apt-get upgrade
sudo apt-get full-upgrade
sudo vi /etc/apt/sources.list /etc/apt/sources.list.d/*
sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade
sudo apt-get full-upgrade
sudo apt-get autoremove
sudo shutdown -r now
```
-
New: Get a list of extensions by file type.
There are community made lists such as dyne's file extension list
-
Correction: Upgrade ubuntu.
Upgrade your system:
```bash
sudo apt update
sudo apt upgrade
reboot
```
You must install the ubuntu-release-upgrader-core package:
```bash
sudo apt install ubuntu-release-upgrader-core
```
Ensure the Prompt line in `/etc/update-manager/release-upgrades` is set to 'lts' using "grep" or "cat":
```bash
grep 'lts' /etc/update-manager/release-upgrades
cat /etc/update-manager/release-upgrades
```
Opening up TCP port 1022
For those using ssh-based sessions, open an additional SSH port with ufw or iptables, starting at port 1022. This is the default port set by the upgrade procedure as a fallback if the default SSH port dies during upgrades.
```bash
sudo /sbin/iptables -I INPUT -p tcp --dport 1022 -j ACCEPT
```
Finally, start the upgrade from Ubuntu 22.04 to 24.04 LTS. Type:
```bash
sudo do-release-upgrade -d
```
-
```bash
file -i <path_to_file>
```
-
New: Simulate the environment of a cron job.
Add this to your crontab (temporarily):
```
* * * * * env > ~/cronenv
```
After it runs, do this:
```bash
env - `cat ~/cronenv` /bin/sh
```
This assumes that your cron runs /bin/sh, which is the default regardless of the user's default shell.
Footnote: if env contains more advanced config, eg
`PS1=$(__git_ps1 " (%s)")$`, it will error cryptically: `env: ": No such file or directory`.
New: Resize a partition of an EC2 instance.
If it's the first partition of the first disk.
```bash
growpart /dev/nvme0n1 1
resize2fs /dev/nvme0n1p1
```
-
New: Debugging Inotify Watch Exhaustion: "No space left on device".
If you get
`No space left on device` errors in systemd logs, but disk space is fine:
```
systemd[755863]: my-service.service: Failed to add control inotify watch descriptor for control group: No space left on device
```
This is actually an inotify watches exhaustion issue, not disk space.
Diagnosis
Check Current Limits
```bash
cat /proc/sys/fs/inotify/max_user_watches
```
Count Current Watch Usage
```bash
find /proc/*/fdinfo/* -exec grep -c "^inotify" {} \; 2>/dev/null | awk '{sum+=$1} END {print sum}'

for pid in $(ps -eo pid --no-headers); do
  if [[ -d /proc/$pid/fd ]]; then
    total=0
    for fd in /proc/$pid/fd/*; do
      if [[ -L "$fd" ]] && readlink "$fd" 2>/dev/null | grep -q "anon_inotify"; then
        fdnum=$(basename "$fd")
        watches=$(grep -c "^inotify" /proc/$pid/fdinfo/$fdnum 2>/dev/null || echo 0)
        total=$((total + watches))
      fi
    done
    if [[ $total -gt 0 ]]; then
      cmd=$(ps -p $pid -o comm --no-headers 2>/dev/null || echo "unknown")
      echo "$total $pid $cmd"
    fi
  fi
done 2>/dev/null | sort -nr | head -10
```
Find High File Descriptor Processes
Often the culprits have many open files:
```bash
for pid in $(ps -eo pid --no-headers); do
  if [[ -d /proc/$pid/fd ]]; then
    fd_count=$(ls /proc/$pid/fd 2>/dev/null | wc -l)
    if [[ $fd_count -gt 100 ]]; then
      cmd=$(ps -p $pid -o comm --no-headers 2>/dev/null)
      echo "$fd_count FDs: $pid $cmd"
    fi
  fi
done
```
- Media servers: Jellyfin, Plex
- Download managers: Sonarr, Radarr, Lidarr
- File sync: Syncthing, Nextcloud
- Development tools: IDEs, file watchers
- Container platforms: Docker, containerd
Solutions
1. Increase Limits (Quick Fix)
```bash
echo 500000 > /proc/sys/fs/inotify/max_user_watches
echo 'fs.inotify.max_user_watches=500000' >> /etc/sysctl.conf
sysctl -p
```
Memory cost: ~540 bytes per watch (500k watches ≈ 270MB kernel memory)
2. Configure Applications
Better long-term solution:
Sonarr/Radarr/Lidarr:
- Settings → Media Management → Disable "Scan for changes"
- Use scheduled scans instead
Jellyfin:
- Admin → Dashboard → Libraries → Disable real-time monitoring
- Use periodic library scans
Syncthing:
- Use polling instead of inotify for large directories
- Add `.stignore` entries for unnecessary paths
-
New: Manage bluetooth.
List devices
Once you've paired your devices you can see them with:
```bash
bluetoothctl devices
```
To check the ones that are connected use:
```bash
bluetoothctl devices Connected
```
Connect device
From the list above you will see the device ID, then you can:
```bash
bluetoothctl connect device_ID
```
But it's better to use `blueman-manager` because it handles the connections and disconnections better.

Multidevice connection
Sometimes the laptop is not able to send the audio streams back to the connected device. Restart the controlling device with:
```bash
systemctl --user restart wireplumber.service
```
-
New: Bluetooth Pairing Troubleshooting: When BLE Devices Won't Connect.
Bluetooth Low Energy (BLE) devices like wireless earbuds appear in device scans but fail to pair with "Device not available" errors, even though they're visible to other devices.
The root cause may be HCI controller corruption causing discovery operations to fail. The Bluetooth hardware gets stuck in a state where it rejects pairing attempts with error code -16 (EBUSY).
Symptoms
- `hcitool lescan` shows the device
- `bluetoothctl` scan shows the device briefly or not at all
- `bluetoothctl pair [MAC]` returns "Device not available"
- `dmesg` shows HCI opcode failures like: `Bluetooth: hci0: Opcode 0x0401 failed: -16`
Solution
Complete Bluetooth stack reset:
```bash
sudo systemctl stop bluetooth
sudo rmmod btusb
sudo modprobe btusb
sudo systemctl start bluetooth
```
If that doesn't work, try forcing bluetoothctl to see LE devices specifically:
```
$: bluetoothctl # this will open the bluetooth cli menu
scan clear
transport le
back
scan on
```
-
New: How to Increase Touchpad Sensitivity on Linux.
Adjust touchpad sensitivity settings for better responsiveness and control on Linux systems.
First, identify your touchpad device:
```bash
xinput list
```
Check current properties:
```bash
xinput list-props "Synaptics TM3381-002"
```
Increase pointer sensitivity (range: -1.0 to 1.0):
```bash
xinput set-prop "Synaptics TM3381-002" "libinput Accel Speed" 1
```
Make scrolling more sensitive:
```bash
xinput set-prop "Synaptics TM3381-002" "libinput Scrolling Pixel Distance" 10
```
Once you have the correct values, make them permanent across reboots by adding them to your startup scripts.
-
New: How to Disable Trackpoint on Linux.
The trackpoint (that red nub in the middle of ThinkPad keyboards) can be accidentally triggered while typing, or if you replace the keyboard it might make the mouse slide randomly without any user intervention. Here's how to disable it on Linux systems.
with xinput
Get the property of your trackpoint with
```bash
xinput list | grep -i track
```
Then disable the trackpoint with:
xinput set-prop "TPPS/2 Elan TrackPoint" "Device Enabled" 0To make it persistent add that line to your desktop startup scripts. It can be run by a non privileged user.
With udev rules
The problem with this approach is that the event number may change across reboots.
First, find your trackpoint in the system:
cat /proc/bus/input/devices | grep -A5 -B5 -i trackpoint

Look for entries like "TPPS/2 Elan TrackPoint" or similar. Note the event number (e.g., event14).

Create a udev rule to ignore the trackpoint device:

sudo sh -c 'echo "KERNEL==\"event[0-9]*\", SUBSYSTEM==\"input\", ATTRS{name}==\"*TrackPoint*\", ENV{LIBINPUT_IGNORE_DEVICE}=\"1\"" > /etc/udev/rules.d/90-disable-trackpoint.rules'

Replace event[0-9]* with your specific event number if needed.

Reload udev rules:

sudo udevadm control --reload-rules
sudo udevadm trigger

Test that the trackpoint no longer responds to input. The udev method typically works immediately and persists across reboots.
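If you have libinput-tools installed, you can also check that libinput is really ignoring the device; this is a sketch and the exact output depends on your hardware:

```bash
# The trackpoint should no longer show up among the devices libinput handles
sudo libinput list-devices | grep -i -A3 trackpoint || echo "trackpoint is ignored"
```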
This may have an ugly side effect: you will no longer be able to use the touchpad mouse buttons, since the trackpoint and those buttons are controlled by the same device.
The solution is to use the touchpad instead (see below).
Be able to click using the touchpad
Activate tap-to-click on the touchpad by pasting the following lines in /etc/X11/xorg.conf.d/30-touchpad.conf

Section "InputClass"
    Identifier "touchpad"
    Driver "libinput"
    MatchIsTouchpad "on"
    Option "Tapping" "on"
    Option "TappingButtonMap" "lrm"
EndSection

The lrm means that:

- 1 finger tap is a left click
- 2 finger tap is a right click
- 3 finger tap is a middle click
You'll need to logout and back in for the change to be applied.
-
New: Fix systemd-tmpfiles "Detected unsafe path transition / → /dev during canonicalization of /dev" with each system start.
Here are some examples of the logs:
systemd-tmpfiles[248]: Detected unsafe path transition / → /dev during canonicalization of /dev.
systemd-tmpfiles[485]: Detected unsafe path transition / → /var during canonicalization of /var.
systemd-tmpfiles[485]: Detected unsafe path transition / → /var during canonicalization of /var/lib.

This suggests that your root dir, /dev and /var are owned by different users.
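A minimal sketch of how to check and, if needed, fix the ownership, assuming these directories should belong to root (verify the current owners before chowning anything):

```bash
# Show the owner, group and permissions of the offending directories
stat -c '%U:%G %a %n' / /dev /var /var/lib

# If / (or any of the others) is not owned by root, restore the expected ownership
sudo chown root:root /
```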
i3wm⚑
-
New: Add i3wm python actions.
You can also use it with async
Create the connection object
from i3ipc import Connection, Event

i3 = Connection()

Focus on a window by its class

# Find the first window whose class is Firefox and focus it
tree = i3.get_tree()
ff = tree.find_classed('Firefox')[0]
ff.command('focus')
Watchtower⚑
-
New: Introduce Canary tokens.
Canary tokens are like motion sensors for your networks, computers and clouds. You can put them in folders, on network devices and on your phones.
Place them where nobody should be poking around and get a clear alarm if they are accessed. They are designed to look juicy to attackers to increase the likelihood that they are opened (and they are completely free).
Canarytokens are easy to sprinkle all over and forget about, until you get the notification that matters. They are super lightweight and don't require installing software or running more background processes that can slow down your PC.
References
set -e if [ -z "$1" ] || [ -z "$2" ]; then echo "Usage: $0 <pv-name> <new-az>" exit 1 fi PV_NAME=$1 NEW_AZ=$2 Get Volume ID from PV VOLUME_ID=$(kubectl get pv $PV_NAME -o jsonpath='{.spec.csi.volumeHandle}') if [ -z "$VOLUME_ID" ]; then echo "Failed to get volume ID for PV $PV_NAME" exit 1 fi PVC_NAME=$(kubectl get pv $PV_NAME -o jsonpath="{.spec.claimRef.name}") NAMESPACE=$(kubectl get pv $PV_NAME -o jsonpath="{.spec.claimRef.namespace}") echo "PVC Name: $PVC_NAME" echo "Namespace: $NAMESPACE" echo "Found volume: $VOLUME_ID" SNAPSHOT_ID=$(aws ec2 create-snapshot --volume-id $VOLUME_ID --description "Migration for $PV_NAME" --query 'SnapshotId' --output text) echo "Snapshot created: $SNAPSHOT_ID" echo "Waiting for snapshot to be ready..." aws ec2 wait snapshot-completed --snapshot-ids $SNAPSHOT_ID echo "Snapshot $SNAPSHOT_ID is ready" VOLUME_TYPE=$(aws ec2 describe-volumes --volume-ids $VOLUME_ID --query 'Volumes[0].VolumeType' --output text) NEW_VOLUME_ID=$(aws ec2 create-volume --snapshot-id $SNAPSHOT_ID --availability-zone $NEW_AZ --volume-type $VOLUME_TYPE --query 'VolumeId' --output text) echo "New volume created: $NEW_VOLUME_ID" echo "Waiting for new volume to be available..." aws ec2 wait volume-available --volume-ids $NEW_VOLUME_ID echo "New volume $NEW_VOLUME_ID is ready" NEW_PV_NAME=${PV_NAME}-migrated cat <<EOF > new-pv.yaml apiVersion: v1 kind: PersistentVolume metadata: name: $NEW_PV_NAME spec: capacity: storage: $(kubectl get pv $PV_NAME -o jsonpath='{.spec.capacity.storage}') volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: $(kubectl get pv $PV_NAME -o jsonpath='{.spec.storageClassName}') csi: driver: ebs.csi.aws.com volumeHandle: $NEW_VOLUME_ID fsType: ext4 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: topology.ebs.csi.aws.com/zone operator: In values: - $NEW_AZ EOF echo "New PV manifest generated: new-pv.yaml" kubectl apply -f new-pv.yaml echo "New PV $NEW_PV_NAME created" kubectl get pvc $PVC_NAME -n $NAMESPACE -o yaml > ${PVC_NAME}-backup.yaml echo "If you haven't size to 0 the statefulset, it is the time to kill the pod to rebind the PVC" kubectl delete pvc $PVC_NAME -n $NAMESPACE echo "Old PVC deleted" cat <<EOF > new-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: $PVC_NAME namespace: $NAMESPACE spec: accessModes: - ReadWriteOnce resources: requests: storage: $(kubectl get pv $NEW_PV_NAME -o jsonpath='{.spec.capacity.storage}') storageClassName: $(kubectl get pv $NEW_PV_NAME -o jsonpath='{.spec.storageClassName}') volumeName: $NEW_PV_NAME EOF echo "New PVC manifest generated: new-pvc.yaml" kubectl apply -f new-pvc.yaml echo "New PVC $PVC_NAME created and bound to new PV" kubectl delete pv $PV_NAME || echo "Failed to delete PV $PV_NAME, probably was not retained" echo "Old PV $PV_NAME deleted" echo "Deleting old volume $VOLUME_ID" aws ec2 delete-volume --volume-id $VOLUME_ID || echo "Failed to delete volume $VOLUME_ID, probably was not retained" echo "Old volume deleted" echo "Deleting snapshot $SNAPSHOT_ID" echo aws ec2 delete-snapshot --snapshot-id $SNAPSHOT_ID echo "Snapshot deleted" echo -e "Migration complete.\nNew PV: $NEW_PV_NAME\nNew PVC: $PVC_NAME" -
New: Error opening terminal: xterm-kitty.
A not-so-good solution that nonetheless fixes the issue is

export TERM=xterm

-
Correction: Change the default docker image.
From containrrr/watchtower to nickfedor/watchtower. The official repo has not been updated in a while, so use this fork instead.
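Switching usually just means changing the image reference. The run below is a hedged sketch following the common Watchtower invocation; check the fork's README in case its flags differ:

```bash
# Run Watchtower from the maintained fork instead of the stale official image
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  nickfedor/watchtower
```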
Mobile Keyboards⚑
-
New: Introduce mobile keyboards comparison.
Finding the right mobile keyboard that balances functionality, privacy, and usability can be challenging. This guide explores the best open-source and privacy-focused keyboard options available for Android devices.
Quick Recommendations
- For Gboard users transitioning: HeliBoard
- For advanced features and AI: FUTO Keyboard
- For unique input method: Thumb-Key
- For future consideration: FlorisBoard (when stable)
FUTO Keyboard ⭐ Recommended
FUTO represents the cutting edge of privacy-focused keyboard technology, incorporating AI features while maintaining offline functionality.
What Makes FUTO Special
FUTO stands out with transformer-based predictions using llama.cpp and integrated voice input powered by whisper.cpp. Unlike other keyboards that require proprietary libraries, FUTO includes swipe/glide typing by default.
The keyboard is currently in pre-alpha, so expect some bugs and missing features. However, the privacy-preserving approach and innovative AI integration make it worth trying.
Key Features
Smart Text Prediction
- Uses pre-trained transformer models for intelligent autocorrect
- Personal language model that learns from your typing (locally only)
- Currently optimized for English, with other languages in development
- Spanish support is still limited
Privacy-First Design
- All AI processing happens on-device
- Your data never leaves your phone
- FUTO doesn't view or store any typing data
- Internet access only for updates and crash reporting (planned to be removed)
Customization Options
- Multilingual typing support
- Custom keyboard layouts
- Swipe typing works well out of the box
Current Limitations
- Pre-alpha software with occasional bugs
- Limited language support beyond English
- Uses a custom "Source First" license (not traditional open source)
- Screen movement issues when using swipe typing
Licensing Concerns
FUTO uses a custom license rather than traditional open source licenses like GPL. While the source code is available, the licensing terms are more restrictive than typical open source projects. The team promises to adopt proper open source licensing eventually, but this transition hasn't happened yet.
Resources
Not there yet
- FUTO voice has a weird bug, at least in Spanish: it sometimes appends phrases like "Subscribe!", "Chau" or "Thanks for watching my video!" to the end of the transcription. This is a known failure mode of Whisper-style models, which tend to hallucinate YouTube-style closing phrases on silence, but it's still kind of annoying and scary.
(¬º-°)¬
HeliBoard - The Reliable Choice
HeliBoard serves as an excellent middle ground, especially for users transitioning from Gboard.
Why Choose HeliBoard
- Active development: Fork of OpenBoard with regular updates
- No network access: Completely offline operation
- User-friendly: Much simpler than AnySoftKeyboard
- Gboard-like experience: Familiar interface for Google Keyboard users
Trade-offs
The main limitation is glide typing, which requires a closed-source library. This compromises the fully open source nature but provides the swipe functionality many users expect.
Resources
Thumb-Key - The Innovative Alternative
For users willing to try something completely different, Thumb-Key offers a unique approach to mobile typing.
The Thumb-Key Concept
Instead of traditional QWERTY, Thumb-Key uses a 3x3 grid layout with swipe gestures for less common letters. This design prioritizes:
- Large, predictable key positions
- Muscle memory development
- Eyes staying on the text area
- Fast typing speeds once mastered
Best For
- Users open to learning new input methods
- Those who prefer larger touch targets
- Privacy enthusiasts who want to avoid predictive text entirely
- People who find traditional keyboards cramped
The keyboard is highly configurable and focuses on accuracy through key positioning rather than AI predictions.
Resources
FlorisBoard - Future Potential
FlorisBoard shows promise but isn't ready for daily use yet.
Current Status
- Early beta development
- Planned integration with GrapheneOS
- Missing key features like suggestions and glide typing
- Limited documentation available
Worth Watching
While not currently recommended for primary use, FlorisBoard could become a strong contender once it reaches stability.
Resources
Alternative Approaches
Unexpected Keyboard
A minimalist keyboard with a unique layout approach.
Resources
Using Proprietary Keyboards with Restrictions
On privacy-focused ROMs like GrapheneOS and DivestOS, you can use proprietary keyboards while blocking internet access. However, this approach has limitations due to inter-process communication between apps.
Note: This method isn't foolproof, as apps can still potentially communicate through IPC mechanisms.
My Current Setup
After testing various options:
- Primary choice: FUTO Keyboard with swipe enabled
- Backup plan: Try FUTO voice input for longer texts when privacy features improve
- Alternative: Thumb-Key if FUTO doesn't work out
The main issue encountered is screen movement during swipe typing, which may be device-specific.
References and Further Reading
Wireguard⚑
-
New: Configure the kill switch.
You can configure a kill-switch to prevent unencrypted packets from flowing through non-WireGuard interfaces by adding the following 'PostUp' and 'PreDown' lines to the '[Interface]' section:

PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

The 'PostUp' and 'PreDown' fields specify an iptables command which, when used with interfaces that have a peer specifying 0.0.0.0/0 as part of its 'AllowedIPs', works together with wg-quick's fwmark usage to drop all packets that are either not coming out of the tunnel encrypted or not going through the tunnel itself. Note that this continues to allow most DHCP traffic through, since most DHCP clients make use of PF_PACKET sockets, which bypass Netfilter. When IPv6 is in use, additional similar lines could be added using ip6tables.
If you want to allow the traffic to your LAN while keeping your kill-switch you can use:
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && iptables -I OUTPUT -p tcp -d 192.168.0.0/24 -j ACCEPT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && iptables -D OUTPUT -p tcp -d 192.168.0.0/24 -j ACCEPT

Here I'm assuming that your LAN is defined by 192.168.0.0/24.

One way to test whether the kill switch works is by deleting the IP address from the wireguard interface

sudo ip a del [IP address] dev [interface]

Where the [IP address] can be seen using the ip a command.

To gracefully recover from this, you will likely have to use the wg-quick command to take the connection down, then bring it back up (see the sketch below).
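A sketch of that whole test cycle, assuming the interface is called wg0 and a hypothetical tunnel address of 10.0.0.2/32; adapt both to your configuration:

```bash
# Note the tunnel address, then remove it to simulate a broken tunnel
ip a show dev wg0
sudo ip a del 10.0.0.2/32 dev wg0

# With the kill switch in place this should fail instead of leaking traffic
curl --max-time 5 https://example.com || echo "kill switch is blocking traffic"

# Recover by cycling the connection
sudo wg-quick down wg0
sudo wg-quick up wg0
```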
-
New: User management.
Wireguard's default user management is not very user friendly as it's difficult to know which key belongs to what user.
I've been looking for a WireGuard admin UI that is actively maintained but isn't cloud-based, and among all the solutions I found, wg-easy is the best candidate because:
- It has just the features I need:
- Clean User management: add, remove, disable
- Clean UI interface
- Actively maintained: last commit 7h ago
- Really popular: 17.7k stars, 1.7k forks
- It's installable either with docker or docker-compose
- It exports prometheus metrics
- It has good documentation.
- It has an ansible playbook
- It has a grafana dashboard
- It has a clean release process
- It's a stable project: 3 years and 10 months old
If wg-easy doesn't work, I'd look at the following projects:

-
New: Introduce Rosenpass.
Rosenpass is free and open-source software based on the latest research in the field of cryptography. It is intended to be used with WireGuard VPN, but can work with all software that uses pre-shared keys. It uses two cryptographic methods (Classic McEliece and Kyber) to secure systems against attacks with quantum computers.
-
New: Introduce wg-easy.
wg-easy is the easiest way to install & manage WireGuard on any Linux host (a sketch of a typical docker run follows the feature list below).

Features:
- All-in-one: WireGuard + Web UI.
- Easy installation, simple to use.
- List, create, edit, delete, enable & disable clients.
- Show a client's QR code.
- Download a client's configuration file.
- Statistics for which clients are connected.
- Tx/Rx charts for each connected client.
- Gravatar support.
- Automatic Light / Dark Mode
- Multilanguage Support
- One Time Links
- Client Expiration
- Prometheus metrics support
- IPv6 support
- CIDR support
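As it's installable with docker, here is a hedged sketch of a typical single-container run; the environment variables, paths and ports below (WG_HOST, PASSWORD_HASH, /etc/wireguard, 51820/51821) are assumptions that may vary between wg-easy versions, so check the project's documentation first:

```bash
# Hypothetical wg-easy deployment on a single host
docker run -d \
  --name wg-easy \
  -e WG_HOST=vpn.example.org \
  -e PASSWORD_HASH='<bcrypt hash of the admin password>' \
  -v ~/.wg-easy:/etc/wireguard \
  -p 51820:51820/udp \
  -p 51821:51821/tcp \
  --cap-add NET_ADMIN \
  --cap-add SYS_MODULE \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  --sysctl net.ipv4.ip_forward=1 \
  --restart unless-stopped \
  ghcr.io/wg-easy/wg-easy
```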
References
-
New: Troubleshoot Failed to resolve interface "tun": No such device.
sudo apt purge resolvconf

-
New: More wg-easy configurations.
Configuration
Keep in mind though that WG_ALLOWED_IPS only sets the routes on the client, it does not limit the traffic at server level. For example, if you set 172.30.1.0/24 as the allowed IPs but the client changes it to 172.30.0.0/16, it will be able to access for example 172.30.2.1. The suggested way to prevent this behaviour is to add the kill switch in the Pre and Post hooks (WG_POST_UP and WG_POST_DOWN).

Restrict Access to Networks with iptables
If you need to restrict many networks you can use this allowed ips calculator
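A minimal sketch of what such a restriction could look like in those hooks, assuming the server interface is wg0 and you only want clients to reach 172.30.1.0/24 (both the interface name and the network are assumptions):

```bash
# WG_POST_UP: insert a catch-all drop for VPN traffic, then the accept above it,
# so clients can only reach 172.30.1.0/24 regardless of their local AllowedIPs
iptables -I FORWARD 1 -i wg0 -j DROP; iptables -I FORWARD 1 -i wg0 -d 172.30.1.0/24 -j ACCEPT

# WG_POST_DOWN: remove both rules again
iptables -D FORWARD -i wg0 -d 172.30.1.0/24 -j ACCEPT; iptables -D FORWARD -i wg0 -j DROP
```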
Monitorization
If you want to use the prometheus metrics you need a version greater than 14; as 15 is not yet released (as of 2025-03-20) I'm using nightly.

You can enable them with the environment variable ENABLE_PROMETHEUS_METRICS=true

Scrape the metrics
Add to your scrape config the required information
- job_name: vpn-admin
  metrics_path: /metrics
  static_configs:
    - targets:
        - {your vpn private ip}:{your vpn exporter port}

Create the monitor client
To make sure that the vpn is working we'll add a client that is always connected. To do so we'll use linuxserver's wireguard docker image (a sketch follows below).
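A hedged sketch of how that always-on client could be started; the mount path, env values and the idea of dropping an exported wg-easy client config into /config are assumptions, so double-check the linuxserver/wireguard documentation:

```bash
# Permanently connected WireGuard client used to monitor the VPN
docker run -d \
  --name wireguard-monitor \
  --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  -e PUID=1000 -e PGID=1000 -e TZ=Europe/Madrid \
  -v /opt/wireguard-monitor/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/wireguard
```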
References
Hardware⚑
Kobo⚑
-
New: Kobo Forma suddenly drains battery in days.
The solution was to do a factory restore, disconnect the wifi and find out which ebook is causing the problem.

Some of the user suggestions:

- My battery started doing this 3 times now. Every time it was a certain book. The first time it was something I downloaded from the kobo store; I removed my recent batch of downloads and it was fine (still not sure which is the offending book tbh). The next two were sideloaded borrowbox books. I think it's something to do with a download not being downloaded correctly/having all the correct information. The kobo's system tries to keep getting the information it wants but fails and keeps doing this in a cycle, causing it to drain the battery.
- I haven't bought any new books lately, but I've been reading a ton of library books. About when the battery drain started, I had returned a few books directly from the Kobo, instead of doing it on my phone in the Libby app. That's got to be the problem.
I disabled the wifi, turned off my Kobo, turned it back on, then restarted the wifi, BUT I did it in Settings, instead of with the quick toggle at the top of the Home screen (when you touch the wifi signal strength indicator). It took longer for the wifi to fully connect than it had all the times I've tried this over the last week. Then I did a Sync, which went through quickly.
It's been 8+ hours, and my Kobo is still at 89%, which is where it was when I started the process!
-
I did find out that it wasn't going to sleep or powering off anymore if I just closed the cover. It will sleep/power off if the cover is not closed. I haven't changed any settings so I'm not sure why this started happening. I've had to power it off manually before closing the cover to make sure there is no battery drain.
-
Basically, the Forma will either go to sleep on its own or I'll put it to sleep and the battery is at some reasonable amount. Usually overnight, but once while it was sitting on the desk next to me for about an hour, the battery will drain down to nothing. It will shut all the way down and sometimes display the warning that it needs charging. Other times, I don't even get that warning.
References
Rock64⚑
-
New: Install Debian in a rock64.
- Go to the rock64 wiki page to get the download directory for the debian version you want to install
- Download firmware.rock64-rk3328.img.gz and partition.img.gz
- Combine the 2 parts into 1 image file: zcat firmware.rock64-rk3328.img.gz partition.img.gz > debian-installer.img
- Write the created .img file to microSD card or eMMC Module using dd: dd if=debian-installer.img of=/dev/sda bs=4M. Replace /dev/sda with your target drive.
- Plug the microSD/eMMC card in the Rock64 (and connect a serial console, or keyboard and monitor) and boot up to start the Debian Installer
Notes:
- An Ethernet connection is required for the above installer
- Remember to leave some space before your first partition for u-boot! You can do this by creating a 32M size unused partition at the start of the device.
- Auto creating all partitions does not work. You can use the following manual partition scheme:
#1 - 34MB Unused/Free Space
#2 - 512MB ext2 /boot (Remember to set the bootable flag)
#3 - xxGB ext4 / (This can be as large as you want. You can also create separate partitions for /home /var /tmp)
#4 - 1GB swap (May not be a good idea if using an SD card)
Software tools⚑
autorandr⚑
-
New: Introduce autorandr.
autorandr is a command line tool to automatically select a display configuration based on connected devices.
Installation
apt-get install autorandr

Usage

Save your current display configuration and setup with:

autorandr --save mobile

Connect an additional display, configure your setup and save it:

autorandr --save docked

Now autorandr can detect which hardware setup is active:

$ autorandr
mobile
docked (detected)

To automatically reload your setup:

$ autorandr --change

To manually load a profile:

$ autorandr --load <profile>

or simply:
$ autorandr <profile>
Liberaforms⚑
-
New: Usage of liberaforms.
Marked true
If you are a forms admin you can mark the answers, which can be used for example to state that they have been checked and are not trolls. The user can still edit the answers through the magic link, but it's not very likely that they turn into a troll afterwards.
Send edit email
If you want the users to receive an email with the magic link so that they can edit their answers you need to add a "Short text" field of type "Email" and then in the Options you need to enable the setting to send the users the magic link
Extract the results through API
Each form at the bottom of the Options tab has a section of API endpoints, once enabled you can extract them with curl:
BASE_URL=https://forms.komun.org
form_id=478
curl -sqH "Authorization: Bearer ${JWT_TOKEN}" "${BASE_URL}/api/form/${form_id}/answers"

That will give you an answer similar to:

{
  "answers": [
    {
      "created": "2025-04-25T15:05:02.384121",
      "data": {
        "radio-group-1712945984567": "Hitzaldia--Charla--Xerrada",
        "radio-group-1713092876455": "55",
        "radio-group-1713373271313": "Castellano_1",
        "radio-group-1713382758036": "Si_1",
        "radio-group-1744968040362-0": "d8b35c755d9d41e2a844a344ae2494d6",
        "text-1712945594310": "Historia de la criptograf\u00eda",
        "text-1712945631444": "",
        "text-1712945663611": "user",
        "text-1712947404812": "user@sindominio.net",
        "text-1744967213162-0": "Divulgativa",
        "text-1744967571620-0": "Ninguno",
        "textarea-1712945646944": "Aproximaci\u00f3n hist\u00f3rica a la criptograf\u00eda, desde la Antig\u00fcedad a d\u00eda de hoy",
        "textarea-1712945755946": "",
        "textarea-1712945806547": "HDMI",
        "textarea-1712945865865": "Privacidad, criptograf\u00eda, matem\u00e1ticas, historia",
        "textarea-1713380502724": ""
      },
      "form": 478,
      "id": 36148,
      "marked": false
    }
  ],
  "meta": {}
}

As you can see the fields have weird names; to get the details of each field you can do the same request but to `${BASE_URL}/api/form/${form_id}` instead of `${BASE_URL}/api/form/${form_id}/answers`

{
  "form": {
    "created": "2025-04-25T11:45:43.633038",
    "introduction_md": "# Call4Nodes Hackmeeting 2025",
    "slug": "call4nodes-hackmeeting-2025-cas",
    "structure": [
      {
        "className": "form-control",
        "label": "T\u00edtulo",
        "name": "text-1712945594310",
        "required": true,
        "subtype": "text",
        "type": "text"
      },
      {
        "className": "form-control",
        "label": "Descripci\u00f3n",
        "name": "textarea-1712945646944",
        "required": true,
        "type": "textarea"
      },
      ...
Filosofía⚑
-
- In the episode Ver al otro: narrativa y democracia of Punzadas Sonoras they make a very interesting analysis of how literature can be a very powerful mechanism of social transformation.
-
How to transform
- Punzadas Sonoras: Ver al otro: narrativa y democracia: how literature can be a very powerful mechanism of social transformation.
- Punzadas Sonoras: Límite: lugar de enunciación: the limit as a transforming element
Desire
Others
- Punzadas Sonoras: Artesano y artista: desnaturalizar la distinción: a super interesting episode to analyse power dynamics in the world of work, the concept of mingei, thinking about "the art of programming", ...
They recommend two interesting books:
- La belleza del objeto cotidiano - Soetsu Yanagi
- Costumbres en común - E. P. Thompson
- Punzadas Sonoras: Matar al Autor: el destino del texto: don't infantilize the receiver
-
New: New relevant episodes about time and other things.
- Punzadas Sonoras: Mirar atrás: un gesto íntimo: a break with the linear conception of time, applied among other things to relationships. They also talk about Leonor Cervantes' article Ya no te gusto como antes, which gives a really nice perspective on relationships.
- Punzadas Sonoras: Artesano y artista: desnaturalizar la distinción: a super interesting episode to analyse power dynamics in the world of work, the concept of mingei, thinking about "the art of programming", ...
- Punzadas Sonoras: El ritmo del habitar con Blanca Lacasa: philosophising about the home, especially about the kitchen as cage and kingdom. Also about the relationship between mothers and daughters.
Amor⚑
-
En el episodio Amor no correspondido: ¿Por qué? de punzadas sonoras hacen un análisis muy interesante. Puedes escucharlo directamente desde aquí.
Arts⚑
Cooking⚑
Cooking Basics⚑
-
New: Todos los cortes para una cebolla.
Picada
- Dividir la cebolla por la raíz
- Apoyar cada mitad en la tabla con la raíz en perpendicular a ti
- Cortes grandes perpendiculares a la raíz
- Poner la raíz en tu dirección
- Cortes grandes paralelos a la tabla
- Cortes grandes perpendiculares a la tabla
Juliana
- Dividir la cebolla por la raíz
- Apoyar cada mitad en la tabla con la raíz en paralelo a ti
- Cortes del grosor al gusto perpendiculares a la tabla
Media luna
- Dividir la cebolla por la raíz
- Apoyar cada mitad en la tabla con la raíz en perpendicular a ti
- Cortes del grosor al gusto perpendiculares a la tabla
Brunoise
- Dividir la cebolla por la raíz
- Apoyar cada mitad en la tabla con la raíz en paralelo a ti
- Cortes del grosor al gusto perpendiculares a la tabla sin llegar hasta el final
- Poner la raíz en perpendicular
- Cortes del grosor al gusto paralelos a la tabla sin llegar hasta el final
- Cortes del grosor al gusto perpendiculares a la tabla
Discos
- Sin dividir la cebolla, poner la raíz paralela a la tabla y en perpendicular a ti
- Cortes del grosor al gusto perpendiculares a la tabla
Aros
- Hacer el corte de discos
- Desmontar los discos
Calistenia⚑
-
New: Introduce calistenia.
Técnica básica
Dominadas
Referencias
Vídeos
-
New: Sentadilla búlgara.
-
New: Shrimp squat vs pistol squat.
- https://gmb.io/shrimp-squats-vs-pistol-squats/
- https://www.reddit.com/r/bodyweightfitness/comments/5loylk/pistol_squats_vs_shrimp_squats/
-
New: Nordic curl.
Languages⚑
Galego⚑
-
New: Descubrimiento de os arquivos da meiga.
- Os arquivos da meiga: Foro de contenido en galego
Esperanto⚑
-
New: Introduce esperanto.
Personal notes on the 38C3 Esperanto gathering
- The language is 130 years old
- It's difficult to reform: there is no defined process to change the language and newcomers proposing changes are frowned upon, although there is an Academy of Esperanto
- There are no different past or conditional tenses
- There is no conjugation by person; the pronouns are always used
Anki decks
There is this nice deck also available from the ankiweb site
References
Science⚑
Artificial Intelligence⚑
OCR⚑
-
New: Add ocr references.
-
New: Add deepseek ocr tool.
Text to speech⚑
-
New: Add link to Exploring the World of Open-Source Text-to-Speech Models article.
- Exploring the World of Open-Source Text-to-Speech Models
Vim plugin development⚑
-
New: Explain how to load a plugin in a local directory with lazy.
You can manually edit those files to develop new features or fix issues on the plugins. Or, if you're developing a plugin in a local directory, you can point lazy at it with the `dir` directive of its plugin spec.
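For example, here is a minimal sketch of such a spec (the plugin name and path below are hypothetical placeholders):

```lua
-- Minimal sketch: make lazy.nvim load a plugin from a local directory instead
-- of fetching it from a remote repository. "my-plugin.nvim" is a placeholder.
require("lazy").setup({
  {
    dir = "~/code/my-plugin.nvim",  -- local development checkout
    config = function()
      require("my-plugin").setup()  -- assumes the plugin exposes a setup() function
    end,
  },
})
```

This way the changes you make in that directory are picked up the next time Neovim loads the plugin, without having to push and pull the repository.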
Birding⚑
-
New: Introduce android apps for birding.
- whoBIRD
- Merlin Bird ID: I've seen it working and it's amazing; however, I'm trying whoBIRD first as it's in F-Droid
Brain food⚑
Galas⚑
Galas 2025⚑
-
New: Añadir las galas de 2025.
Los premios al mejor contenido de 2025.
- Mejores libros de 2025
- Mejores series de 2025
- Mejores películas de 2025
- Mejores podcasts de 2025
- Mejores videojuegos de 2025
Mejores libros de 2025
- Treatise on efficacy de François Jullien
- El deseo según Gilles Deleuze de Maite Larrauri
- Looking Backward, 2000-1887 de Edward Bellamy
Transformadores
Que hacen un crack dentro de ti que te transforma.
Treatise on efficacy de François Jullien

- disfrute: ★★★★★
- año: 2004
- género: filosofía
- longitud: 211 páginas
Mi libro favorito del año sin duda. Lo he leído 2 veces y media. Mucho subrayado, muchas notas al margen. Es uno de esos libros que te dan unas nuevas gafas para ver el mundo de otra forma.
Ha transformado completamente mi concepción del tiempo, la eficacia, eficiencia, acción, transformación, oportunidad, ...
Eso sí, es lectura densa, pero muy muy recomendable.
Me encanta el puente que nos regala Jullien del pensamiento chino vivido y entendido desde la experiencia de ojos europeos.
El deseo según Gilles Deleuze de Maite Larrauri

- disfrute: ★★★★★
- año: 2000
- género: filosofía
- longitud: 94 páginas
Lo "leí" como nunca antes había leído un libro, simplemente maravilloso. Después lo he leído otras 2 veces.
Maite hace accesible a profanos conceptos clave de Deleuze sobre el deseo.
Capitalismo libidinal

- disfrute: ★★★★★
- año: 2024
- género: filosofía
- longitud: 224 páginas
Libro maravilloso que me abrió la mente a otra concepción del tiempo y del deseo.
La vida se ha hecho mercado. Como si fuese nuestra segunda naturaleza, nos movemos en Uber, viajamos con Airbnb, ligamos en Tinder, compramos en Glovo, nos entretenemos en Netflix, hablamos de nosotros mismos en el lenguaje del capital humano.
Esta segunda naturaleza, que Amador Fernández-Savater llama capitalismo libidinal, nos promete la felicidad, pero lo que produce realmente es sufrimiento y malestar, en forma de precariedad, endeudamiento y dolor psíquico. Paradójicamente, la derecha parece hoy más eficaz que nadie para canalizar esa desesperación y su fuerza de rechazo (Trump, Bolsonaro, Milei), mientras que las estrategias de comunicación y las políticas de contención de la izquierda se muestran insuficientes.
¿Es posible reapropiarnos de nuestro malestar como energía de transformación social? Será necesario aprender a escuchar y hablar el lenguaje del cuerpo, imaginar y activar políticas del deseo.
Utopía no es una isla de Layla Martínez

- disfrute: ★★★★★
- año: 2020
- género: ensayo
- longitud: 212 páginas
Una guía de vida, lo he vuelto a leer después de 3 o 4 años. Este libro fue el que me dio las energías para volver a militar tras un largo periodo de hartazgo y desilusión. También me abrió en su momento el camino a buscar la utopía.
El libro te atrapa y es un viaje maravilloso.
The Life-Changing Magic of Tidying Up de Marie Kondo

- disfrute: ★★★★☆
- año: 2014
- género: autoayuda
- longitud: 231 páginas
Me pilló en la mudanza y de nuevo quitando toda la mierda capitalista y las grilladas japas, tiene ideas de trasfondo muy interesantes.
Japanese cleaning consultant Marie Kondo takes tidying to a whole new level, promising that if you properly simplify and organize your house once, you'll never have to do it again. Most methods advocate a room-by-room approach, which doom you to pick away at your piles of stuff forever. The KonMari Method, with its revolutionary category-by-category system, leads to lasting results. In fact, none of Kondo's clients have lapsed (and she still has a three-month wait list).
With detailed guidance for determining which items in your house "spark joy" (and which don't), this international best-seller featuring Tokyo's newest lifestyle phenomenon will help you clear your clutter and enjoy the unique magic of a tidy home - and the calm, motivated mindset it can inspire.
The Bullet Journal method de Ryder Carroll

- disfrute: ★★★★☆
- año: 2018
- género: autoayuda
- longitud: 320 páginas
Un libro que me ha dado muchísimas ideas, pero para disfrutarlo tienes que ser capaz de obviar toda la mierda que lo rodea: su speech sobre su vida típico de libro de autoayuda, su enfoque belicista épico, horrenda visión individualista de que somos culpables de nuestras situaciones y que sólo nosotros podemos arreglarlas con una salida individual.
For years, Ryder Carroll tried countless organizing systems, online and off, but none of them fit the way his mind worked. Out of sheer necessity, he developed a method called the Bullet Journal that helped him become consistently focused and effective. When he started sharing his system with friends who faced similar challenges, it went viral. Just a few years later, to his astonishment, Bullet Journaling is a global movement. The Bullet Journal Method is about much more than organizing your notes and to-do lists. It's about what Carroll calls "intentional living": weeding out distractions and focusing your time and energy in pursuit of what's truly meaningful, in both your work and your personal life. It's about spending more time with what you care about, by working on fewer things. Carroll wrote this book for frustrated list-makers, overwhelmed multitaskers, and creatives who need some structure. Whether you've used a Bullet Journal for years or have never seen one before, The Bullet Journal Method will help you go from passenger to pilot of your own life.
Unf*ck Your Habitat de Rachel Hoffman

- disfrute: ★★★★☆
- año: 2017
- género: autoayuda
- longitud: 256 páginas
Impresionantes
Looking Backward, 2000-1887 de Edward Bellamy

- disfrute: ★★★★★
- año: 1888!!
- género: novela, sci-fi
- longitud: 276 páginas
Ha despertado en mí sentimientos e ideas como no lo ha hecho un libro en mucho tiempo. Sobre todo la sorpresa de que mucho de la actualidad es realmente muy antiguo.
Empecé leyéndolo en inglés pero no me enteraba de nada. El estilo tampoco me estaba enganchando. Hasta que descubrí que se escribió en el 1888!!! También ayuda a entender aquellas cosas que rechinan, como hablar sólo "del hombre".
Es impresionante y triste que ya imagina ideas socialistas aún inalcanzables que me siguen emocionando aún en 2025...
Flipo con la perspectiva feminista de los cuidados, la visión del trabajo, el antipunitivismo (imaginaba ya un mundo sin cárceles) ya en 1888. Me sorprende que habla de desahucios en masa, fraudes millonarios, especulaciones con productos de primera necesidad. Luego tiene ciertos campos en los que es un poco más meh: la sanidad pública está un poco retrasada, la parte romántica es horrenda, es bastante clasista y tecnócrata. Y es muy curioso a la par que gracioso, con todos los avances que ha sido capaz de imaginar, que no pudiese ver el final de la religión.
El flipe se me relajó cuando vi que el manifiesto comunista se había escrito 40 años antes. Y luego vino el efecto rebote. En vez de ser una lectura inspiradora, que también, me está entrando un poco de desesperación por la mierda de mundo que me está tocando vivir, lo lejos que estamos de un mundo bello de vivir, y a la velocidad a la que nos estamos alejando
Es una bonita crónica del despertar comunista cuando se te cae la venda de los ojos. Apaga la tele, enciende la mente. En el paseo por el barrio pobre hace una bonita descripción del quitar la venda de la deshumanización del pobre como vía de acabar con dicha opresión.
Me ha gustado mucho el final y el tierno epílogo, se mascaba la revolución rusa ya en el 1888. Qué pena que el capitalismo saliese victorioso... Dónde estaríamos ahora si no...
A man being put into a hypnotic sleep, is awakened 113 years later to an entirely new social structure.
Entretenidos
Un pasatiempos agradable. Para días tontos.
Ñu de Pau Luque

- disfrute: ★★★★☆
- año: 2024
- género: filosofía
Pau tiene una curiosa manera de filosofar, más cercana a la gente. Se lee muy bien, te ríes bastante a menudo. Eso sí, no tiene ni un solo capítulo, y eso hace que sea muy jodido dejar de leerlo.
Me hace mucha gracia Curiel, pienso que me gustaría conocerla pero luego pienso, que menudo vértigo!
«Lo más sospechoso de las soluciones es que se las encuentra siempre que se quiere.» Esta frase de Rafael Sánchez Ferlosio abre un libro excepcional, tan brillante como inclasificable. Entre el relato, la autobiografía y el ensayo filosófico, Pau Luque convoca una galería de personajes extravagantes y tiernos para pensar con ellos la incertidumbre que caracteriza a toda existencia humana: bellos italianos de oficio desconocido, boxeadores frustrados, adolescentes con dentaduras caóticas, poetas clandestinas, émulos de san Ignacio de Loyola, filósofos abrumados por las cuestiones prácticas más triviales o swingers confundidos vagan y divagan por las calles de Barcelona, Génova, Ciudad de México o Vilafranca del Penedès. Son criaturas mugrientas y deslumbrantes que se enfrentan a problemas cotidianos pero también trascendentales.
Frente a las soluciones simples (el ñu que se suele utilizar en los crucigramas en español para rellenar huecos --casi un chiste entre los aficionados--) y a las recetas de manual, Pau Luque hace una loa a los secretos, los equívocos, los errores e incluso las contradicciones. Ñu es un iluminador libro de antiayuda.
We have always lived in the castle de Shirley Jackson

- disfrute: ★★★★☆
- año: 1962
- género: novela, drama
- longitud: 187 páginas
Recomendado por Layla Martínez, autora de Utopía no es una isla y Carcoma, el estilo me recuerda mucho a Auri en The Slow Regard of Silent Things.
La casa tiene mucha presencia como en carcoma. Al principio hubo un momento que casi lo dejo, pero me alegro de haberlo acabado.
El estilo es curioso y agradable de leer, me cuadra que le guste a Layla.
Shirley Jackson's beloved gothic tale of a peculiar girl named Merricat and her family's dark secret
Taking readers deep into a labyrinth of dark neurosis, We Have Always Lived in the Castle is a deliciously unsettling novel about a perverse, isolated, and possibly murderous family and the struggle that ensues when a cousin arrives at their estate.
Una belleza terrible de Edurne Portela

- disfrute: ★★★★☆
- año: 2025
- género: novela histórica
- longitud: 335 páginas
Una mirada novelada a la historia de mujeres troskistas. Al final engancha pero me costaron un poco ciertas partes.
Algunos pensamientos que fui apuntando mientras lo leía:
- Ay qué ilusión me dio ver que había sacado un nuevo libro. Al leer la descripción no me pude contener y empecé a dar palmadas y emitir ruidos nerviosos.
- Ya lo tengo entre mis manos, huele bien, como todos los de gutenberg.
- El primer capítulo me encanta, no solo leo su obra sino que encima la acompaño en esta aventura. Me recuerda un poco a su reflexión con Maddie y las fronteras pero esta vez está acompañada y la reflexión es más madura. Debe ser que Maddie le dio fuerzas. Al decir que compartía casa con José Ovejero me puse a buscar como loco si eran pareja, y luego pensé: ¡qué más da!
- De nuevo, como en sus otros libros, el estilo de la escritura te atrapa, ay qué bien tener estas 300 páginas por delante.
- Me gusta cómo se centra en las historias de ellas incluso cuando están rodeadas de peña tan importante
- Me gusta mucho su filosofía de escritura, el imaginar sobre el inventar incluso a costa de potencial novelístico. Creo que tiene mucho más fuerza y es mucho más respetuosa por las historias verdaderas que intentan representar.
- Sus finales, que siempre son maravillosos. Ganas de comentarla con Henar, Caps y Rosa.
A Deadly Education de Naomi Novik

- disfrute: ★★★★☆
- año: 2020
- género: novela fantasía mágica
- longitud: 320 páginas
Crea un mundo muy interesante, es como un Harry Potter más oscuro.
Mejores series de 2025
Este 2025 mis series favoritas han sido:
Impactantes
Contenido original, que deja su huella.
Say nothing

- disfrute: ★★★★★
- año: 2024
- género: drama
- longitud: 9 episodios
Ambientada a lo largo de toda la historia del IRA, es interesante que no cuentan la versión de los vencedores, sin dejar de ser crítica con los vencidos. Me sorprendieron mucho los paralelismos con ETA.
Cada capítulo es una joya. Probablemente la mejor serie que he visto en 2025.
Adolescencia

- disfrute: ★★★★★
- año: 2024
- género: drama, crimen
- longitud: 4 episodios largos e intensos
Serie que engancha muy fuertemente, que nos hace reflexionar sobre la relación adulto - joven, el papel de la tecnología y a donde tiende el mundo.
Creo que es una serie que deberían de ver todos los padres y madres con sus hijes, también en los institutos. Creo que se podrían generar unos debates muy ricos.
El formato también es muy especial. La grabación continua hace que la acción no te deje un respiro. Yo pensaba que el formato de grabación continua lo conseguían haciendo cortes y uniéndolo con efectos gráficos, pero no. Cada capítulo está grabado de un tirón, y si se equivocan, a empezar de nuevo. Flipo.
Cómo actúa todo el reparto es impresionante.
El único pero que le pongo es que sólo muestra una visión muy pesimista de la juventud, en la que es fácil caer en la desesperación de que no tienen remedio y nos vamos al carajo. Que aunque es cierto, también hay otra gran cantidad de jóvenes que vienen pisando con mucha fuerza sobre los que podríamos aprender mucho.
El mundo de una familia se pone patas arriba cuando Jamie Miller, de 13 años, es arrestado y acusado de asesinar a una compañera de clase. Los cargos contra su hijo les obliga a enfrentarse a la peor pesadilla de cualquier padre.
Tiernas
Contenido de domingo noche, aquel que te abraza, que es calentito, en el que los personajes se convierten parte de tu familia.
Doctor en Alaska

- disfrute: ★★★★★
- año: 1990
- género: comedia, drama, fantasía
- longitud: 110 episodios
Zona de confort absoluta que ha envejecido genial. Después de verla hace unos años, he revisitado capítulos sueltos con la familia. Si no la has visto, es un buen acompañamiento para el invierno.
Joel Fleishman es un médico recién licenciado. Debido a una cláusula de la letra pequeña del contrato de su beca acaba en la remota y descaradamente extraña ciudad de Cicely (Alaska). Cordialmente bienvenido por el fundador de Cicely, Maurice Minnifield, antiguo astronauta de la NASA, y por el resto de la peña de inadaptados y excéntricos que forma el vecindario, Joel descubre que es cada vez más difícil abandonar la ciudad a la que inconscientemente ha llegado. Todo se complica por la presencia de Maggie O'Connell, alcaldesa de Cicely y piloto de la localidad, una mujer hermosa pero completamente independiente; ambientado todo ello por el musical y filosófico programa de radio presentado por Chris en la KBHR.
Graciosas
Contenido que arranca carcajadas
Animal S01

- disfrute: ★★★★★
- año: 2025
- género: comedia
- longitud: 9 episodios
La serie más graciosa que he visto en el año. Carcajada tras carcajada, la retranca gallega en todo su esplendor. Mais unha pena que non fose en galego :(
Antón, un veterinario sin un duro, acepta trabajar en una tienda de mascotas de lujo y pasa de cuidar animales en el campo a vender cucaditas y caprichos para perros mimados.
División Palermo S02

- disfrute: ★★★★☆
- año: 2025
- género: comedia
- longitud: 6 episodios
Quizá es por la pérdida de novedad, pero no me reí tanto como con la primera. Aun así tiene muchos puntos muy muy graciosos.
Una Guardia Urbana inclusiva, ideada como operación de marketing para mejorar la imagen de las fuerzas de seguridad, descubrirá algo que no debía y se enfrentará con unos extraños narcos.
The Bear S03

- disfrute: ★★★★☆
- año: 2024
- género: comedia, drama
- longitud: 6 episodios
Su majestad S01

- disfrute: ★★★★☆
- año: 2025
- género: comedia
- longitud: 7 episodios
Me la vi casi del tirón en un momento de bajona anímica. Es muy divertido cómo se mete con los borbones, pero duele que consigue que al final empatices con ella. Maldito cine.
Cuando se ve salpicado por un escándalo financiero, el rey Alfonso XIV decide apartarse de la primera línea pública durante unos meses. Pero alguien debe quedarse al frente de la institución. No hay más alternativa que su única hija, Pilar. Ahora la princesa tendrá que demostrarle al país que no es la irresponsable, vaga e inútil que todos creen. Lo que pasa es que igual tienen razón.
Entretenidas
Un pasatiempos agradable. Para días tontos.
Detective Touré

- disfrute: ★★★★☆
- año: 2024
- género: comedia, misterio
- longitud: 6 episodios
Es bastante divertida y está genial cómo meten el contenido político
Touré, un inmigrante guineano asentado en Bilbao, se gana la vida como improvisado detective de poca monta. Su pericia, intuición y particular sentido del humor le hacen ganarse la confianza de la Ertzaintza y le sumergen en una compleja investigación en la que se enfrentará a múltiples peligros: conocerá de cerca la corrupción inmobiliaria, los tejemanejes del conflictivo barrio de San Francisco e incluso se topará con la mafia nigeriana.
Andor S02

- disfrute: ★★★★☆
- año: 2025
- género: comedia, crimen
- longitud: 12 episodios
No le llega a la suela del zapato a la primera temporada (que, si no la has visto, merece mucho la pena). Pero sigue siendo entretenida, empieza flojilla y va ganando con los episodios.
Son bonitos los guiños a Palestina
Eric

- disfrute: ★★★★☆
- año: 2025
- género: drama, misterio
- longitud: 6 episodios
Un padre desesperado y un tenaz policía luchan contra sus propios demonios en la Nueva York de los años 80 mientras buscan a su hijo de nueve años, que ha desaparecido.
La residencia S01

- disfrute: ★★★★☆
- año: 2025
- género: comedia, crimen
- longitud: 8 episodios
Empieza un poco flojilla pero con el tiempo va mejorando.
Una excéntrica y brillante inspectora investiga un crimen en la Casa Blanca, donde todos los empleados e invitados a la cena de Estado esconden un secreto y podrían ser el asesino.
Mejores películas de 2025
Este año mis películas favoritas han sido:
Es curioso cómo ha pesado el cine antiguo frente al moderno.
Transformadoras
Barrio

- disfrute: ★★★★★
- año: 1998
- género: drama
- longitud: 1h34m
En uno de esos barrios situados al sur de las grandes ciudades, a los que no llega ni el metro ni el dinero, Javi, Manu y Rai son compañeros de instituto, pero, sobre todo, amigos. Tienen esa edad en la que ni se es hombre ni se es niño, en la que se habla mucho de chicas y muy poco con ellas. Comparten también la vida en el barrio, el calor del verano y un montón de problemas. El primero es el propio barrio, un lugar de grandes bloques de viviendas sociales, de ladrillo oscuro y arquitectura deprimente y depresiva. Allí hay pocas cosas que hacer, y en agosto aún menos. El centro de la ciudad queda lejos y las comunicaciones son malas, así que los tres amigos pasan la mayor parte del tiempo por las calles del barrio.
No Other Land

- disfrute: ★★★★★
- año: 2024
- género: documental
- longitud: 1h25m
Podría haber caído tanto en impactante como en transformadora. Una producción muy necesaria tras más de un año de genocidio en Palestina...
Un joven activista palestino documenta durante cinco años los violentos desalojos de palestinos en Cisjordania. Con la colaboración de cineastas israelíes y palestinos, narra la compleja relación que surge entre él y un periodista israelí dispuesto a unirse a su lucha, a pesar de sus diferentes circunstancias. Una obra conmovedora que explora resistencia, empatía y esperanza en un contexto de conflicto.
Cómo tener sexo

- disfrute: ★★★★☆
- año: 2023
- género: drama
- longitud: 1h31m
Película que sería interesante ver con los jóvenes ya sea en casa o en los institutos. Aborda muy bien el tema del consentimiento.
Tres adolescentes británicas se van de vacaciones para celebrar sus ritos de iniciación: beber, salir de fiesta y ligar, en lo que debería ser el mejor verano de sus vidas.
Graciosas
Contenido que arranca carcajadas.
Arsénico por compasión

- disfrute: ★★★★★
- año: 1944
- género: comedia, crimen
- longitud: 1h58m
La vi dos veces del tirón en una semana, me parece una obra maestra. Siento que ya no se hacen comedias como esta. El guión es maravilloso, y la traducción también. Aunque soy un acérrimo defensor de ver las películas en versión original, esta recomendaría verla en castellano.
Un crítico teatral que acaba de casarse decide visitar a sus ancianas tías antes de marcharse de luna de miel. Durante la visita descubrirá que las encantadoras viejecitas tienen una manera muy peculiar de practicar la caridad.
El gran dictador

- disfrute: ★★★★★
- año: 1940
- género: comedia, bélica
- longitud: 2h05m
El discurso final es memorable.
Un humilde barbero judío tiene un parecido asombroso con el dictador de la nación Tomania, que promete sacar adelante y que culpa a los judíos de la situación del país. El dictador ataca al país fronterizo, pero es confundido con el barbero por sus propios guardias, siendo ingresado en un campo de concentración. Simultáneamente, el pobre barbero es confundido con el dictador...
Tiempos modernos

- disfrute: ★★★★★
- año: 1936
- género: comedia, drama, romance
- longitud: 1h27m
Es muy triste que sea tan actual...
Extenuado por el frenético ritmo de la cadena de montaje, un obrero metalúrgico acaba perdiendo la razón. Después de recuperarse en un hospital, sale y es encarcelado por participar en una manifestación en la que se encontraba por casualidad. En la cárcel, también sin pretenderlo, ayuda a controlar un motín, gracias a lo cual queda en libertad. Una vez fuera, reemprende la lucha por la supervivencia en compañía de una joven huérfana a la que conoce en la calle.
El apartamento

- disfrute: ★★★★★
- año: 1960
- género: comedia, drama, romance
- longitud: 2h05m
Otra comedia impresionante, con crítica social.
C.C. Baxter es un modesto pero ambicioso empleado de una compañía de seguros de Manhattan. Está soltero y vive solo en un discreto apartamento que presta ocasionalmente a sus superiores para sus citas amorosas. Tiene la esperanza de que estos favores le sirvan para mejorar su posición en la empresa. Pero la situación cambia cuando se enamora de una ascensorista que resulta ser la amante de uno de los jefes que usan su apartamento.
La vida segun Philomena Cunk

- disfrute: ★★★★☆
- año: 2024
- género: comedia
- longitud: 1h10m
Conmovedoras
Coco

- disfrute: ★★★★★
- año: 2017
- género: animación, aventura
- longitud: 1h49m
Segunda vez que la veo y no pierde. Al principio me dio mucho miedo de lo que podía hacer Disney con México, pero la verdad es que es una obra de arte.
Un joven aspirante a músico llamado Miguel se embarca en un viaje extraordinario a la mágica tierra de sus ancestros. Allí, el encantador embaucador Héctor se convierte en su inesperado amigo y le ayuda a descubrir los misterios detrás de las historias y tradiciones de su familia.
Entretenidas
Un pasatiempos agradable. Para días tontos.
La princesa prometida

- disfrute: ★★★★★
- año: 1987
- género: Aventura, comedia, fantasía, romance
- longitud: 1h45m
Me resistí mucho tiempo a verla porque tenía miedo de que fuese una ñoñería de amor romántico. Que tristemente lo es xD, pero todo lo demás hace que sea una maravilla. Los diálogos, las coreografías, los personajes... Eso sí, prepárate para exasperarte ante el papel de princesa objeto inútil.
Me volvió a despertar el sentimiento de "ya no se hacen películas como estas". Tiene un algo que lo despierta (además de la retrogradez machista)
Después de buscar fortuna durante cinco años, Westley retorna a su tierra para casarse con su amada, la bella Buttercup, a la que había jurado amor eterno. Sin embargo, para recuperarla habrá de enfrentarse a Vizzini y sus esbirros. Una vez derrotados éstos, tendrá que superar el peor de los obstáculos: el príncipe Humperdinck pretende desposar a la desdichada Buttercup, pese a que ella no lo ama, ya que sigue enamorada de Westley.
Kneecap

- disfrute: ★★★★★
- año: 2024
- género: Comedia, drama
- longitud: 1h45m
Me encanta que sean ellos mismos los que actúan, la historia está muy chula y el grupo mola todo.
En Irlanda hay 80.000 hablantes de irlandés, 6.000 viven en el norte y tres de ellos lo van a poner todo patas arriba cuando formen un trío de rap llamado Kneecap. Anárquicos, salvajes y dispuestos a todo para salvar su lengua materna.
Inside Out 2

- disfrute: ★★★★★
- año: 2024
- género: animación, aventura, comedia
- longitud: 1h36m
Riley entra en la adolescencia y el Cuartel General de su cabeza sufre una repentina reforma para hacerle hueco a algo totalmente inesperado propio de la pubertad: ¡nuevas emociones! Alegría, Tristeza, Ira, Miedo y Asco, con años de impecable gestión a sus espaldas (según ellos...) no saben muy bien qué sentir cuando aparece con enorme ímpetu Ansiedad. Y no viene sola: le acompañan envidia, vergüenza y aburrimiento.
El tercer hombre

- disfrute: ★★★★★
- año: 1949
- género: misterio, suspense
- longitud: 1h44m
Orson Welles es Harry Lime y Joseph Cotten actúa como su amigo de la infancia, Holly Martins, en este clásico thriller de Graham Greene, dirigido por Carol Reed. Martins busca a Lime en la caótica postguerra de Viena, y se encuentra a sí mismo metido en un entorno de amores, decepción y asesinatos.
Los Rose

- disfrute: ★★★★☆
- año: 2025
- género: comedia, drama, romance
- longitud: 1h45m
La vida parece fácil para la pareja perfecta que forman Ivy y Theo: carreras de éxito, un matrimonio feliz y unos hijos estupendos. Pero detrás de la fachada de su supuesta vida ideal, se avecina una tormenta: la carrera de Theo se desploma mientras que las ambiciones de Ivy despegan, lo que desencadena una caja de Pandora de competitividad y resentimiento ocultos.
Una historia verdadera

- disfrute: ★★★★☆
- año: 1999
- género: comedia, drama, romance
- longitud: 1h45m
Alvin Straight (Richard Farnsworth) es un achacoso anciano que vive en Iowa con una hija discapacitada (Sissy Spacek). Además de sufrir un enfisema y pérdida de visión, tiene graves problemas de cadera que casi le impiden permanecer de pie. Cuando recibe la noticia de que su hermano Lyle (Stanton), con el que está enemistado desde hace diez años, ha sufrido un infarto, a pesar de su precario estado de salud, decide ir a verlo a Wisconsin. Para ello tendrá que recorrer unos 500 kilometros, y lo hace en el único medio de transporte del que dispone: una máquina cortacésped.
El mago de Oz

- disfrute: ★★★★☆
- año: 1939
- género: aventura, fantasía
- longitud: 1h43m
La gran evasión

- disfrute: ★★★★☆
- año: 1963
- género: aventura, bélica, drama
- longitud: 2h43m
La quimera del oro

- disfrute: ★★★★☆
- año: 1925
- género: aventura, comedia, drama
- longitud: 1h35m
Aventuras de un solitario buscador de oro en Alaska, donde se topa con varios personajes rudos, y se enamora de la hermosa Georgia, a la que trata de conquistar.
Mujeres al borde de un ataque de nervios

- disfrute: ★★★★☆
- año: 1988
- género: comedia, drama
- longitud: 1h28m
Pepa e Iván son actores de doblaje. Él es un mujeriego empedernido y, después de una larga relación, rompe con Pepa: le deja un mensaje en el contestador pidiéndole que le prepare una maleta con sus cosas. Al quedarse sola, Pepa no soporta vivir en una casa llena de recuerdos y decide alquilarla. Mientras espera que Iván vaya a recoger la maleta, la casa se le va llenando de gente extravagante de la que aprenderá muchas cosas sobre la soledad y la locura.
Mejores podcasts de 2025
- Punzadas sonoras
- Quieto todo el mundo
- (de eso no se habla) temporada 2: Se llamaba como yo
- No es el fin del mundo
Paula Ducay e Inés García me han recordado la importancia de la Filosofía como nadie. Gracias a ellas he descubierto nuevas maneras de ver, me han regalado palabras que describen percepciones, pensamientos y sensaciones que tengo y gracias a ellas he conseguido esclarecer un poco más mi entendimiento del mundo y de mí mismo. En especial han sido la base para desenterrar pilares imprescindibles de mi vida como son el deambular creativo y la solitud.

Siento que mi camino con ellas ha sido perfecto, me enamoré de ellas en los derroteros de Carne cruda en 2023. Eran episodios de 20 minutos muy accesibles, y una vez que ya me enganché a ellas, los programas de una hora se hacen cortos.
Escucharlas es un absoluto placer.
Facu Díaz y Miguel Maldonado han conseguido alegrarme las horas de cocina de los lunes con la manera más amable de acercarme desde la comedia a la actualidad política. Hacen un combo maravilloso.

Con un tono divertido y relajado, en el programa van comentando desde la ignorancia, la improvisación y, en ocasiones, la desidia las noticias más importantes de la semana.
Esta misa roja ha creado una verdadera religión a lo largo de los programas, no sólo por el lore sino por el vacío existencial que dejan esa semana que no publican programa. Especialmente cuando es porque los nazis les ocupan los espacios, asco de país...
(de eso no se habla) temporada 3: Se llamaba como yo
En «De eso no se habla» hablamos de los silencios que crea esa frase, tanto en nosotras como en la sociedad: de las historias que se esconden detrás de ellos, y de qué pasa cuando los rompemos.

Esta tercera temporada trata sobre la memoria de la niña Begoña Urroz, sobre las cinco décadas de silencio de una familia… Y sobre el ruido que lo rompió. Muestra cómo la guerra sucia del estado contra ETA
No es el fin del mundo
Una mirada muy interesante a la geopolítica mundial

El podcast semanal de El Orden Mundial (EOM) para entender qué pasa en el mundo. Análisis, contexto y matices sobre la realidad internacional. Porque estar al día de qué pasa más allá de nuestras fronteras no debería ser ni complicado ni aburrido.
Mejores videojuegos de 2025
Mi juego favorito de este año ha sido Thronefall.
Thronefall

- disfrute: ★★★★★
- platforms: linux
- year: 2024
- genre: Strategy, tactical, puzzle, arcade
Al principio era un poco escéptico porque tienes unas localizaciones fijas para las torres, pero sigue siendo un juego difícil y entretenido. Además que los gráficos son preciosos.
A minimalist game about building and defending a little kingdom. Thronefall is a classic strategy game without unnecessary complexity, just some healthy hack-and-slay. Build up your base during the day, and defend it 'til your last breath at night.

- disfrute: ★★★★★
- year: 2025
- genre: puzzle
- platform: desktop, mobile
- price: free
Simple de jugar pero muy adictivo.
A minesweeper game that requires observation by mixing roguelike elements with the classic gameplay.
You Must Build A Boat

- disfrute: ★★★★★
- platforms: android, linux
- year: 2015
- genre: puzzle
You Must Build A Boat is the sequel to the award winning \"10000000\". Travel the world, run procedurally generated dungeons finding artifacts, capturing monsters and recruiting crew for your...
10000000

- disfrute: ★★★★★
- platforms: android, linux
- year: 2012
- genre: puzzle
10000000 is an award winning hybrid RPG/Action/Puzzle game. Matching tiles controls your character enabling you to explore, fight and loot. When you are not facing monsters you will be back in your prison, constructing buildings and getting stronger for your next run.
Books⚑
-
2024 ha sido un año muy potente para mí en cuanto a lectura se refiere.
La mayor parte de los 23 libros que me he terminado han tenido buena puntuación.
- 5 estrellas: 8
- 4 estrellas: 7
- 3 estrellas: 6
- 2 estrellas: 1
- 1 estrella: 1
6 se han quedado por el camino y solamente recuerdo uno que se me haya hecho pesado de leer y me haya forzado a terminar.
Política
The global police state by William I. Robinson
William ha puesto palabras bonitas y claras a mis pensamientos como no lo hacía un libro desde el manifiesto comunista hace muchos años. Un análisis impoluto sobre la crisis del capitalismo y el mundo al que nos estamos dirigiendo. A la vez que dando una dirección a la que apuntar para combatirlo. Ambiciosa y difícil, pero la que más me cuadra. Me encantaría debatirlo con la gente y lo regalaré allá donde vaya. Da un poco de yuyu porque hasta el final final final no da atisbo de luz al final del túnel tan necesaria en estos tiempos oscuros, pero aguantad, que merece la pena (✿◠‿◠).

A muchos nos aterra el nuevo auge del fascismo. Solo en Europa, la extrema derecha integra cinco gobiernos y tiene representación parlamentaria destacada en veintisiete países. Pero esto es apenas la punta del iceberg de un proceso bastante más complejo: el auge del Estado policial global como respuesta a la profunda crisis del sistema capitalista actual. A medida que el neoliberalismo dispara las desigualdades hasta límites insospechados (los veintiséis millonarios más importantes del mundo poseen hoy más de la mitad de la riqueza mundial mientras dos mil millones de personas viven en situación de pobreza), los individuos se vuelven «desechables». Una población excedente que supone una amenaza de rebelión para la clase capitalista. Para refrenarla, se hacen ubicuos todo tipo de sistemas de control, rastreos biométricos, encarcelamientos generalizados, barcos‐prisión, violencia policial, persecución de migrantes, represión contra activistas medioambientales, eliminación de prestaciones sociales, desahucios, precarización de las clases medias, guerras estratégicas sustentadas por capital privado... Así, el Estado policial global no remite ya a un mecanismo policial y militar, sino a la propia economía global como totalidad represiva, cuya lógica es tan mercantil como política y cultural. Y, mientras la codicia infinita de la clase dominante hunde al capitalismo en una crisis sin precedentes (llevando la degradación ecológica y el deterioro social a su límite absoluto), el neofascismo afianza su posición en ese Estado policial global cuyo objetivo es la exclusión coercitiva de la humanidad excedente. Basándose en datos estremecedores y argumentos incontrovertibles, William I. Robinson demuestra hasta qué punto el capitalismo del siglo XXI se ha convertido en un sistema absoluto de represión como único método para mantenerse en pie más allá de sus contradicciones terminales, y defiende la urgencia de crear un movimiento que trascienda los meros llamados a la justicia social y ataque a la yugular.
Joyful militancy by carla bergman and Nick Montgomery
Un libro muy interesante desde todas las perspectivas. Por cómo debió de ser el proceso creativo, por el cuidado y respeto a todos los distintos movimientos que representa, por las personas entrevistadas y las ideas que transmiten... Para mi ha sido un libro clave para una de las transformaciones más importantes de concepto de vida que he dado este año, entrar más en contacto con mi deseo y dejar que este fluya sobre las rigideces autoimpuestas entre otras cosas por el concepto del deber. Esto aplicado a mi vida en general y a mi militancia en particular.
Es cierto que la mayor parte de los conceptos transgresores son heredados del feminismo, pero el libro los refleja muy bien y puede ser un buen punto de entrada para los que no nos hemos zambullido aún muy profundamente en leer teoría feminista.

Why do radical movements and spaces sometimes feel laden with fear, anxiety, suspicion, self-righteousness and competition? The authors call this phenomenon rigid radicalism: congealed and toxic ways of relating that have seeped into radical movements, posing as the ‘correct’ way of being radical. In conversation with organizers and intellectuals from a wide variety of currents, the authors explore how rigid radicalism smuggles itself into radical spaces, and how it is being undone. Rather than proposing ready-made solutions, they amplify the questions that are already being asked among movements. Fusing together movement-based perspectives and contemporary affect theory, they trace emergent forms of trust, care and responsibility in a wide variety of radical currents today, including indigenous resurgence, anarchism, transformative justice, and youth liberation. Joyful Militancy foregrounds forms of life in the cracks of Empire, revealing the ways that fierceness, tenderness, curiosity, and commitment can be intertwined.
Interviewees include Silvia Federici, adrienne maree brown, Marina Sitrin, Gustavo Esteva, Tasnim Nathoo, Kian Cham, Leanne Betasamosake Simpson, Sebastian Touza, Walidah Imarisha, Margaret Killjoy, Glen Coulthard, Richard Day, Melanie Matining, Zainab Amadahy and Mik Turje.
Verano sin vacaciones. Las hijas de la Costa del Sol por Ana Geranios
Libro que dolorosamente me quitó la venda de los ojos en cuanto al turismo y la restauración. Tiene un formato perfecto, la primera parte (Verano sin vacaciones) te llega a la patata haciéndote vivir en las entrañas lo podrido que está el sector y luego en la segunda (Las hijas de la Costa del Sol) le da forma de ensayo y te llega al coco.
Lo leímos en un club de lectura muy chulo organizado por la Escuela de las Periferias, que junto a Estuve aquí y me acordé de nosotros de Anna Pacheco, nos ayudó a tener unas discusiones super interesantes que terminaron de definir mi nuevo concepto sobre el turismo. Además tuve la suerte de asistir a una mesa redonda impresionante con Ana, Valeria del Sindicato de Inquilinas y dos compas de la PAH que le dieron distintos matices a la problemática de la vivienda que tenemos que sufrir. Y para colmo luego estuvimos rajando en un parque con Ana y luego dimos un paseo por el barrio. Un final maravilloso para un libro fantástico.

¿Cómo sería un mundo sin hostelería? ¿Es posible pensar en una sociedad en la que ninguna persona tuviera que servir ni ser servida, donde las bandejas no tuvieran ninguna utilidad?
Este libro no va de eso. Es justo lo contrario: el análisis de un sector económico que se enriquece gracias al trabajo de quienes se dedican a servir a un público que puede permitírselo.
Verano sin vacaciones es el diario de una trabajadora del sector hostelero de la costa malagueña; un relato al que se suma Las hijas de la Costa del Sol, un ensayo situado que nos interpela como turistas, pero también nos hace comprender qué hay detrás de una industria que descansa sobre la explotación laboral, el servilismo político y la voracidad ecológica.
El leitmotiv es hacernos preguntas, dialogar, pensar, compartir; imaginarnos, ahora sí, cómo sería un mundo sin hostelería.
Thinking in systems by Donella H. Meadows
Me ha encantado. Al principio dudaba de la autora por no saber de qué pie cojeaba pero al final es un libro que abre mentes. Muchas ganas de resumirlo. Al principio del año descubrí el concepto de systems thinking: "a way of making sense of the complexity of the world by looking at it in terms of wholes and relationships rather than by splitting it down into its parts. It has been used as a way of exploring and developing effective action in complex contexts, enabling systems change. Systems thinking draws on and contributes to systems theory and the system sciences."
Me maravilló la idea de tener un sistema nuevo y sistemático de analizar el mundo, me encantan los modelos y creo que systems thinking puede llegar a ser muy potente. De los cuatro libros que me leí del tema este es sin duda el mejor.
This is a primer that brings you to a tangible world where anyone can understand systems and engage with them in meaningful ways. The problems we face – war, hunger, poverty, climate change, racism, gender-based violence cannot be solved by quick fixes in isolation. We need to see the whole system and reach deeper to the structures and mindsets that are at play. Written with a hopeful and visionary tone, Thinking in Systems helps readers overcome confusion and helplessness, which is a first step in the work of change.
Novela
Mejor la ausencia de Edurne Portela
Lo empecé el 6 de marzo y esa noche me leí 209 páginas. A las 5:35 dije que ya era suficiente, aunque me diese rabia no terminármelo en un día. Hoy ha caído en la madrugada del 7 al 8 de marzo. Muy icónico todo, ya no me da rabia haberlo acabado hoy. Me ha maravillado, cómo escribe Edurne, es una pasada. Te coge con la primera frase y es tu cuerpo el que suplica que dejes de leer. No hay piedad. El cambio de lenguaje a medida que va avanzando la vida de Amaia es alucinante. Cómo le da un repaso a todo el conflicto de Euskadi visto desde alguien que sin estar dentro está salpicada. No sé si tuvo miedo al publicarlo, es bastante crítica con toda la movida. Me sorprende porque ella sí que está politizada. He pensado que me gustaría preguntarle su opinión respecto a lo que pasó y cómo ella lo vivió. He pensado varias veces a lo largo de la novela si es autobiográfica. Es impresionante que sea su primera novela.
Ha sido un año del (no estoy seguro de si bien llamado) conflicto vasco, ya que también leí Las fieras de Clara Usón que también me gustó mucho.

Crecer siempre implica alguna forma de violencia, contra uno mismo o contra aquellos que quieren imponer su autoridad. Cuando además la vida trascurre en un pueblo de la margen izquierda del Nervión durante los años 80 y 90, y todo es heroína, paro, detritus medioambiental, cuando en las calles silban cada semana las pelotas de goma y los gases lacrimógenos y las paredes están llenas de consignas asesinas, la violencia no es sólo un problema personal. Mejor la ausencia nos presenta una familia destruida, atravesada por la violencia de su entorno. Amaia, la pequeña de cuatro hermanos, narra ese entorno brutal desde su mirada de niña y adolescente. Compartimos con ella su miedo, su perplejidad, su rabia, ante un padre que hiere, una madre que se esconde, tres hermanos que, como ella, sólo buscan salir adelante.
Amaia es la joven que se enfrenta, hasta alcanzar sus propios límites, a este mundo hostil. Amaia es también la mujer que años después vuelve a su pueblo para encontrarse con un pasado irresuelto. En ese camino de ida y vuelta, en sus huidas y regresos, descubrirá, a su pesar, que nadie escapa del entorno en el que se cría, de la familia que le toca en suerte. Y que reconocerlo es la única manera de sobrevivir.
To kill a mockingbird de Harper Lee
Inglés muy difícil. La historia muy bien contada, es como que todo el pueblo es de la familia. Engancha bastante.
One of the best-loved stories of all time, To Kill a Mockingbird has been translated into more than 40 languages, sold more than 30 million copies worldwide, served as the basis for an enormously popular motion picture, and voted one of the best novels of the 20th century by librarians across the United States. A gripping, heart-wrenching, and wholly remarkable tale of coming-of-age in a South poisoned by virulent prejudice, it views a world of great beauty and savage inequities through the eyes of a young girl, as her father -- a crusading local lawyer -- risks everything to defend a black man unjustly accused of a terrible crime.
Lawyer Atticus Finch defends Tom Robinson -- a black man charged with the rape of a white girl. Writing through the young eyes of Finch's children Scout and Jem, Harper Lee explores with rich humor and unswerving honesty the irrationality of adult attitudes toward race and class in small-town Alabama during the mid-1930s Depression years. The conscience of a town steeped in prejudice, violence, and hypocrisy is pricked by the stamina and quiet heroism of one man's struggle for justice. But the weight of history will only tolerate so much.
Gestión del tiempo
Four thousand weeks by Oliver Burkeman
Me ha encantado. Me flipa encontrar un libro de autoayuda y time management con una perspectiva bastante anticapitalista. Probablemente es el mejor libro de gestión de tiempo que conozco, ha generado en mí esos momentos preciosos en los que surgen ideas fuera de los límites mentales que tenía antes. Ha sido bastante liberador y ha influido mucho en crear mi nueva manera de entender el tiempo y cómo navegarlo. Lo he utilizado mucho este año para rediseñar todos mis roadmap adjustments, en especial el trimestral y el anual. Muy muy recomendable.

The average human lifespan is absurdly, outrageously, insultingly brief: if you live to 80, you have about four thousand weeks on earth. How should we use them best?
Of course, nobody needs telling that there isn't enough time. We're obsessed by our lengthening to-do lists, our overfilled inboxes, the struggle against distraction, and the sense that our attention spans are shrivelling. Yet we rarely make the conscious connection that these problems only trouble us in the first place thanks to the ultimate time-management problem: the challenge of how best to use our four thousand weeks.
Four Thousand Weeks is an uplifting, engrossing and deeply realistic exploration of this problem. Rejecting the futile modern obsession with 'getting everything done,' it introduces readers to tools for constructing a meaningful life, showing how the unhelpful ways we've come to think about time aren't inescapable, unchanging truths, but choices we've made, as individuals and as a society - and its many revelations will transform the reader's worldview.
Drawing on the insights of both ancient and contemporary philosophers, psychologists, and spiritual teachers, Oliver Burkeman sets out to realign our relationship with time - and in doing so, to liberate us from its grasp.
Essentialism: The disciplined pursuit of less
Me apesta su tono emprendedor de sueño americano y su falta de perspectiva de clase. Eso sumado a su prepotencia mandaloriana de "this is the way" con el essentialist path y non essentialist path hace que la lectura sea bastante horrenda. Dicho esto, si consigues abstraerte de toda esa mierda, el autor aporta conceptos interesantes que me han ayudado mucho a diseñar las revisiones trimestrales y anuales del roadmap adjustments. Es especialmente interesante para aquellas personas que no sabemos decir que no y acabamos enfrascados en mil movidas.

Essentialism isn't about getting more done in less time. It's about getting only the right things done. Have you ever found yourself stretched too thin? Do you simultaneously feel overworked and underutilized? Are you often busy but not productive? Do you feel like your time is constantly being hijacked by other people's agendas? If you answered yes to any of these, the way out is the Way of the Essentialist. Essentialism is more than a time-management strategy or a productivity technique. It is a systematic discipline for discerning what is absolutely essential, then eliminating everything that is not, so we can make the highest possible contribution toward the things that really matter. By forcing us to apply more selective criteria for what is Essential, the disciplined pursuit of less empowers us to reclaim control of our own choices about where to spend our precious time and energy -- instead of giving others the implicit permission to choose for us. Essentialism is not one more thing. It's a whole new way of doing everything. It's about doing less, but better, in every area of our lives.
Ensayo
Me ha encantado cómo trata temas tan complicados, su filosofía de vida y lo bien que está escrito. Una ventana a todo tipo de "amor" desde la "nueva" perspectiva feminista. Me entraron muchas ganas de leerlo ya que no paraban de mencionarlo en Punzadas sonoras, me parece que Paula e Inés me van a dar muy buen material de lectura, aunque no pueda seguirles el ritmo ni de lejos xD.

Una mirada franca y divertida sobre el amor, la intimidad y la identidad en el siglo XXI.
Diez días después de cancelar su boda, CJ Hauser se embarcó en una expedición a Texas para estudiar a la grulla trompetera. Tras una semana chapoteando en las marismas del golfo comprendió que había estado a punto de firmar un contrato para vivir la vida de otra persona.
¿Qué pasaría si decidiéramos liberarnos de la idea tradicional de felicidad y nos abriéramos a lo inesperado? Hauser se sirve de su propia experiencia para explorar las relaciones sentimentales, los fracasos amorosos, la intimidad y la identidad en el siglo XXI. Disecciona la personalidad de los protagonistas de Expediente X mientras intenta entender qué es el amor, rememora sus peores citas de Tinder, chatea con desconocidos que conversan como robots y analiza a Katharine Hepburn en Historias de Filadelfia para aprender a no perderse en una relación.
Divertido, inclasificable y brutalmente sincero, este libro trata de cómo modelamos nuestra vida sentimental y nuestra comprensión de los demás a través de los relatos; una lectura para aquellos que aprenden a encontrar la alegría en el no saber e intentan, aunque a veces fracasen, construir particulares formas de vida, familia y hogar.
Las dos amigas de Toni Morrison
Un libro breve, inquietante y curioso que te hace cuestionarte a ti mismo. Me gustaría dar una reflexión más larga pero es imposible no meter spoilers. Cuando te lo leas lo hablamos :P
Podcasts⚑
-
New: Mejores podcasts de 2024.
Estoy muy contento porque este año he descubierto podcasts muy buenos, hasta el punto de considerar unidireccionalmente a varias de ellas ya parte de mi familia.
Facu Díaz y Miguel Maldonado han conseguido alegrarme las horas de cocina de los lunes con la manera más amable de acercarme desde la comedia a la actualidad política.

Con un tono divertido y relajado, en el programa van comentando desde la ignorancia, la improvisación y, en ocasiones, la desidia las noticias más importantes de la semana.
Esta misa roja ha creado una verdadera religión a lo largo de los programas, no sólo por el lore sino por el vacío existencial que dejan esa semana que no publican programa. Especialmente cuando es porque los nazis les ocupan los espacios, asco de país...
Paula Ducay e Inés García me han recordado la importancia de la Filosofía como nadie. Gracias a ellas he descubierto nuevas maneras de ver, me han regalado palabras que describen percepciones, pensamientos y sensaciones que tengo y gracias a ellas he conseguido esclarecer un poco más mi entendimiento del mundo y de mí mismo. En especial han sido la base para desenterrar pilares imprescindibles de mi vida como son el deambular creativo y la solitud.

Siento que mi camino con ellas ha sido perfecto, me enamoré de ellas en los derroteros de Carne cruda de la temporada pasada (una pena enorme que no las hayan renovado). Son episodios de 20 minutos muy accesibles, y una vez que ya me enganché a ellas, los programas de una hora se hacen cortos.
Escucharlas es un absoluto placer.
Asaari Bibang, Lamine Thior y Frank T nos regalan un par de veces al mes una ventana a la realidad de ser una persona afrodescendiente en España. Es un podcast muy amable de escuchar, te partes con las puyas que se echan, además de ser super interesante ver el salto generacional que hay entre Frank y (Asaari y Lamine). Eso si, tienes que aprender a vivir con que se interrumpan continuamente. Al principio esto me rayó bastante ya que es especialmente irritante cuando se lo hacen a Asaari. Luego ves que no tienen filtro y no sólo se cortan entre ellas, también lo hacen con las personas invitadas.

Una manera muy fácil de introducir el antirracismo en tu vida ya que con sus relatos eres capaz de verlo, identificarlo, aprender y corregir elementos racistas que tenemos inculcados por este sistema asqueroso. Y creo que como material divulgativo es perfecto ya que lo abordan desde un punto de vista antipunitivista nada agresivo, sin que por ello se dejen títere sin cabeza.
Violeta Muñoz y Javier Gallego han sido la voz seria de la actualidad para mi este año.

Sí es cierto que esta temporada estoy un poco más desencantado con el equipo, ya que:
- Ya no cuentan con Punzadas Sonoras para los derroteros y a Santiago Alba Rico nunca le he tragado.
- Hace tiempo que no llaman a Pablo Elorduy del Salto, me encantaba oírle y le echo de menos :(.
- Han cambiado el Nido de rojos de la temporada pasada por "A diestra y siniestra" en el que incluyen a peña de derechas en los debates para darle otro punto de vista. Ya se van regulando pero los primeros programas tuve que dejar de escucharlos con cabreo porque tenía que escuchar cosas como que el genocidio de Palestina estaba justificado y lindezas similares. Entiendo que puede darle color al debate, pero para eso ya me voy a los medios tradicionales la verdad...
Aun así sigue siendo el referente para mi para saber qué está pasando tanto en el mundo en general como en el mundo activista.
Muriendo porque saquen más
Este año hay unos cuantos podcasts que no han continuado y que me encantaría que volviesen, como:
Videogames⚑
DragonSweeper⚑
-
New: Introduce dragonsweeper.
DragonSweeper is an addictive simple RPG-tinged take on the Minesweeper formula. You can play it for free in your browser.
If you're lost at the beginning start reading the ArsTechnica blog post.
Tips
- Use `Shift` to mark numbers you already know.
References
Age of Empires⚑
-
New: New Teutons vs Portuguese video.
Board Games⚑
Monologues⚑
-
New: Add Sammy Obeid.
- Sammy Obeid: anti-Zionist comedian