May of 2024
Life Management⚑
Time management⚑
- New: Time is not a resource to be tamed.

You'll see the concept of *time management* everywhere. I feel it's daring to suggest that you have the power to actually manage time. No, you can't, just as you can't tame the sea. Time is not a resource to be spent or managed; the best we can do is try to understand its flows and navigate them as well as we can.
Life Chores Management⚑
beancount⚑
- New: Introduce beanSQL.
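beanSQL (BQL) queries can be run with the `bean-query` tool that ships with beancount. A minimal sketch (the ledger file name is illustrative):

```bash
# Open an interactive query shell against your ledger
bean-query ledger.beancount

# Or run a single query directly; this one sums expenses per account
bean-query ledger.beancount "SELECT account, sum(position) WHERE account ~ '^Expenses:' GROUP BY account"
```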
- New: Get the quarter of a date.

Use the `quarter(date)` selector in the `SELECT`. For example:

```sql
SELECT quarter(date) as quarter, sum(position) AS value
WHERE account ~ '^Expenses:' OR account ~ '^Income:'
GROUP BY quarter
```

It will return the quarter in the format `YYYY-QX`.

- New: Building your own dashboards.
I was wondering whether to create fava dashboards or to create them directly in grafana.

Pros of fava dashboards:

- They are integrated in fava, so it would be easy to browse other beancount data (although this could also be done in another window if I used grafana).
- There is no need to create the beancount grafana data source logic.
- It's already a working project; I would just need to tweak an existing example.

Cons:

- I may need to learn echarts and write JavaScript to tweak some of the dashboards.
- I wouldn't have all my dashboards in the same place.
- It only solves part of the problem: I'd still need to write the bean-sql queries. But using beanql is probably the best way to extract data from beancount anyway.
- It involves more magic than using grafana.
- grafana dashboards are prettier.
- I wouldn't use my grafana knowledge.
- I'd learn a new tool only to use it here instead of taking the chance to improve my grafana skillset.

I'm going to try fava dashboards and see how it goes.
Fava Dashboards⚑
- New: Introduce Fava Dashboards.
Installation

```bash
pip install git+https://github.com/andreasgerstmayr/fava-dashboards.git
```

Enable this plugin in Fava by adding the following line to your ledger:

```beancount
2010-01-01 custom "fava-extension" "fava_dashboards"
```
Then you'll need to create a `dashboards.yaml` file where your ledger lives.

The plugin by default looks for a `dashboards.yaml` file in the directory of the Beancount ledger (e.g. if you run `fava personal.beancount`, the `dashboards.yaml` file should be in the same directory as `personal.beancount`).

The configuration file can contain multiple dashboards, and a dashboard contains one or more panels. A panel has a relative width (e.g. `50%` for 2 columns, or `33.3%` for 3 column layouts) and an absolute height.

The `queries` field contains one or multiple queries. The Beancount query must be stored in the `bql` field of the respective query. It can contain Jinja template syntax to access the `panel` and `ledger` variables described below (example: use `{{ledger.ccy}}` to access the first configured operating currency). The query results can be accessed via `panel.queries[i].result`, where `i` is the index of the query in the `queries` field.

Note: In addition to the Beancount query, Fava's filter bar further filters the available entries of the ledger.
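As a reference, a minimal `dashboards.yaml` could look roughly like this sketch (the dashboard, panel and query are illustrative; check the bundled example for the exact schema):

```yaml
dashboards:
  - name: Overview
    panels:
      - title: Total expenses 💸
        width: 50%
        height: 200px
        queries:
          - bql: SELECT sum(position) AS value WHERE account ~ '^Expenses:'
        type: html
        script: |
          // single aggregated row; index by the first operating currency
          const total = panel.queries[0].result[0].value[ledger.ccy];
          return `<h2>${total} ${ledger.ccy}</h2>`;
```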
Common code for utility functions can be defined in the dashboards configuration file, either inline in `utils.inline` or in an external file defined in `utils.path`.

Start your configuration
It's best to tweak the example rather than start from scratch. Get the example by running:

```bash
cd $(mktemp -d)
git clone https://github.com/andreasgerstmayr/fava-dashboards
cd fava-dashboards/example
fava example.beancount
```
Configuration reference
HTML, echarts and d3-sankey panels: The `script` field must contain valid JavaScript code. It must return a valid configuration depending on the panel `type`. The following variables and functions are available:

- `ext`: the Fava `ExtensionContext`
- `ext.api.get("query", {bql: "SELECT ..."})`: executes the specified BQL query
- `panel`: the current (augmented) panel definition. The results of the BQL queries can be accessed with `panel.queries[i].result`.
- `ledger.dateFirst`: first date in the current date filter
- `ledger.dateLast`: last date in the current date filter
- `ledger.operatingCurrencies`: configured operating currencies of the ledger
- `ledger.ccy`: shortcut for the first configured operating currency of the ledger
- `ledger.accounts`: declared accounts of the ledger
- `ledger.commodities`: declared commodities of the ledger
- `helpers.urlFor(url)`: adds the current Fava filter parameters to the url
- `utils`: the return value of the `utils` code of the dashboard configuration

Jinja2 panels: The `template` field must contain valid Jinja2 template code. The following variables are available:

- `panel`: see above
- `ledger`: see above
- `favaledger`: a reference to the `FavaLedger` object

Common Panel Properties

- `title`: title of the panel. Default: unset
- `width`: width of the panel. Default: 100%
- `height`: height of the panel. Default: 400px
- `link`: optional link target of the panel header.
- `queries`: a list of dicts with a `bql` attribute.
- `type`: panel type. Must be one of `html`, `echarts`, `d3_sankey` or `jinja2`.

HTML panel: The `script` code of HTML panels must return valid HTML. The HTML code will be rendered in the panel.

ECharts panel: The `script` code of Apache ECharts panels must return valid Apache ECharts chart options. Please take a look at the ECharts examples to get familiar with the available chart types and options.

d3-sankey panel: The `script` code of d3-sankey panels must return valid d3-sankey chart options.

Jinja2 panel: The `template` field of Jinja2 panels must contain valid Jinja2 template code. The rendered template will be shown in the panel.

Debugging

Add `console.log` statements in the JavaScript code to debug it.

References
Examples
- Vertical bars with one series using the year
```yaml
- title: Net Year Profit 💰
  width: 50%
  link: /beancount/income_statement/
  queries:
    - bql: |
        SELECT year, sum(position) AS value
        WHERE account ~ '^Expenses:' OR account ~ '^Income:'
        GROUP BY year
      link: /beancount/balance_sheet/?time={time}
  type: echarts
  script: |
    const currencyFormatter = utils.currencyFormatter(ledger.ccy);
    const years = utils.iterateYears(ledger.dateFirst, ledger.dateLast);
    const amounts = {};

    // the beancount query only returns periods where there was at least
    // one matching transaction, therefore we group by period
    for (let row of panel.queries[0].result) {
      amounts[`${row.year}`] = -row.value[ledger.ccy];
    }

    return {
      tooltip: {
        trigger: "axis",
        valueFormatter: currencyFormatter,
      },
      xAxis: {
        data: years,
      },
      yAxis: {
        axisLabel: {
          formatter: currencyFormatter,
        },
      },
      series: [
        {
          type: "bar",
          data: years.map((year) => amounts[year]),
          color: utils.green,
        },
      ],
    };
```
- Vertical bars with one series using quarters
```yaml
- title: Net Quarter Profit 💰
  width: 50%
  link: /beancount/income_statement/
  queries:
    - bql: |
        SELECT quarter(date) as quarter, sum(position) AS value
        WHERE account ~ '^Expenses:' OR account ~ '^Income:'
        GROUP BY quarter
      link: /beancount/balance_sheet/?time={time}
  type: echarts
  script: |
    const currencyFormatter = utils.currencyFormatter(ledger.ccy);
    const quarters = utils.iterateQuarters(ledger.dateFirst, ledger.dateLast).map((q) => `${q.year}-${q.quarter}`);
    const amounts = {};

    // the beancount query only returns periods where there was at least
    // one matching transaction, therefore we group by period
    for (let row of panel.queries[0].result) {
      amounts[`${row.quarter}`] = -row.value[ledger.ccy];
    }

    return {
      tooltip: {
        trigger: "axis",
        valueFormatter: currencyFormatter,
      },
      xAxis: {
        data: quarters,
      },
      yAxis: {
        axisLabel: {
          formatter: currencyFormatter,
        },
      },
      series: [
        {
          type: "bar",
          data: quarters.map((quarter) => amounts[quarter]),
        },
      ],
    };
```
- Vertical bars showing the evolution of one query over the months
```yaml
- title: Net Year Profit Distribution 💰
  width: 50%
  link: /beancount/income_statement/
  queries:
    - bql: |
        SELECT year, month, sum(position) AS value
        WHERE account ~ '^Expenses:' OR account ~ '^Income:'
        GROUP BY year, month
      link: /beancount/balance_sheet/?time={time}
  type: echarts
  script: |
    const currencyFormatter = utils.currencyFormatter(ledger.ccy);
    const years = utils.iterateYears(ledger.dateFirst, ledger.dateLast);
    const months = Array.from({ length: 12 }, (_, i) => `${i + 1}`);
    const amounts = {};

    for (let row of panel.queries[0].result) {
      if (!amounts[row.year]) {
        amounts[row.year] = {};
      }
      amounts[row.year][row.month] = -row.value[ledger.ccy];
    }

    return {
      tooltip: {
        valueFormatter: currencyFormatter,
      },
      legend: {
        top: "bottom",
      },
      xAxis: {
        data: months,
      },
      yAxis: {
        axisLabel: {
          formatter: currencyFormatter,
        },
      },
      series: years.map((year) => ({
        type: "bar",
        name: year,
        // index the amounts by month so that missing months don't shift the bars
        data: months.map((month) => amounts[year][month]),
        label: {
          show: false,
          formatter: (params) => currencyFormatter(params.value),
        },
      })),
    };
```
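Besides echarts panels, here's a rough sketch of what a `jinja2` panel could look like, assuming the `ledger` variables described in the configuration reference are exposed to the template (the title is illustrative):

```yaml
- title: Main operating currency
  width: 33.3%
  height: 200px
  type: jinja2
  template: |
    <p>The ledger's first operating currency is <strong>{{ ledger.ccy }}</strong>.</p>
```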
Coding⚑
Languages⚑
Logql⚑
- New: Compare the values of a metric with the past.
The offset modifier allows changing the time offset for individual range vectors in a query.
For example, the following expression counts all the logs within the last ten minutes to five minutes rather than last five minutes for the MySQL job. Note that the offset modifier always needs to follow the range vector selector immediately.
```logql
count_over_time({job="mysql"}[5m] offset 5m) // GOOD
count_over_time({job="mysql"}[5m]) offset 5m // INVALID
```
Python Snippets⚑
- New: Use pathlib's Path write_text in append mode.

It's not supported; you need to `open` it:

```python
with my_path.open("a") as f:
    f.write("...")
```
- New: Suppress ANN401 for dynamically typed `*args` and `**kwargs`.

Use `object` instead:

```python
def function(*args: object, **kwargs: object) -> None:
```
- To write an if-then-else statement in Python so that it fits on one line you can use:

```python
fruit = 'Apple'
isApple = True if fruit == 'Apple' else False
```
- New: Get package data relative path.

If you want to reference files from the `foo/package1/resources` folder, use the `__file__` variable of the module. Inside `foo/package1/__init__.py`:

```python
from os import path

resources_dir = path.join(path.dirname(__file__), 'resources')
```
psycopg2⚑
- New: Introduce psycopg2.
Installation

Install the dependencies:

```bash
sudo apt install libpq-dev python3-dev
```

Then install the package:

```bash
pip install psycopg2
```
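A minimal usage sketch, assuming a local database `mydb` with a `users` table (all names and credentials are illustrative):

```python
import psycopg2

# Connect to an existing database
conn = psycopg2.connect(
    host="localhost", dbname="mydb", user="postgres", password="password"
)

# "with conn" commits the transaction on success and rolls back on error
with conn:
    with conn.cursor() as cur:
        cur.execute("SELECT id, name FROM users WHERE name = %s", ("Alice",))
        for row in cur.fetchall():
            print(row)

conn.close()
```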
DevOps⚑
Storage⚑
OpenZFS⚑
- New: Set zfs module parameters or options.
Most of the ZFS kernel module parameters are accessible in the SysFS `/sys/module/zfs/parameters` directory. Current values can be observed by:

```bash
cat /sys/module/zfs/parameters/PARAMETER
```

Many of these can be changed by writing new values. These are denoted by Change|Dynamic in the PARAMETER details below.

```bash
echo NEWVALUE >> /sys/module/zfs/parameters/PARAMETER
```

If the parameter is not dynamically adjustable, an error can occur and the value will not be set. It can be helpful to check the permissions for the `PARAMETER` file in SysFS.

In some cases, the parameter must be set prior to loading the kernel modules, or it is desired to have the parameters set automatically at boot time. For many distros, this can be accomplished by creating a file named `/etc/modprobe.d/zfs.conf` containing a text line for each module parameter using the format:

```
options zfs PARAMETER=VALUE
```

Some parameters related to ZFS operations are located in module parameters other than in the zfs kernel module. These are documented in the individual parameter description. Unless otherwise noted, the tunable applies to the zfs kernel module. For example, the `icp` kernel module parameters are visible in the `/sys/module/icp/parameters` directory and can be set by default at boot time by changing the `/etc/modprobe.d/icp.conf` file.

- New: Configure the deadman failsafe measure.
ZFS has a safety measure called the deadman, configured through `zfs_deadman_failmode`. When a pool sync operation takes longer than `zfs_deadman_synctime_ms`, or when an individual I/O operation takes longer than `zfs_deadman_ziotime_ms`, the operation is considered to be "hung". If `zfs_deadman_enabled` is set, the deadman behavior is invoked as described by `zfs_deadman_failmode`. By default, the deadman is enabled and set to `wait`, which results in "hung" I/O operations only being logged. The deadman is automatically disabled when a pool gets suspended.

`zfs_deadman_failmode` can have the next values:

- `wait`: Wait for a "hung" operation to complete. For each "hung" operation a "deadman" event will be posted describing that operation.
- `continue`: Attempt to recover from a "hung" operation by re-dispatching it to the I/O pipeline if possible.
- `panic`: Panic the system. This can be used to facilitate automatic fail-over to a properly configured fail-over partner.
Follow the guides under Set zfs module parameters or options to change this value.
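For example, to set the failmode to `continue` (the value is chosen only for illustration):

```bash
# Change it at runtime (the parameter is dynamically adjustable)
echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode
```

And to persist it across reboots, add this line to `/etc/modprobe.d/zfs.conf`:

```
options zfs zfs_deadman_failmode=continue
```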
- You can see the ZFS events using `zpool events -v`. If you want to be alerted on these events you can use this service to ingest them into Loki and raise alerts.
Monitoring⚑
Loki⚑
- New: Maximum of series reached for a single query.

Go to the `loki-local-config.yaml`, find the `limits_config` section and raise the limit:

```yaml
limits_config:
  max_query_series: 100000
```
But probably you're doing something wrong.

Orgzly⚑

- New: Both local and remote notebook have been modified.

You can force load or force save a single note with a long tap.
AlertManager⚑
- New: Using time intervals.

The values of `time_intervals` can be:

```yaml
- times:
  [ - <time_range> ...]
  weekdays:
  [ - <weekday_range> ...]
  days_of_month:
  [ - <days_of_month_range> ...]
  months:
  [ - <month_range> ...]
  years:
  [ - <year_range> ...]
  location: <string>
```
All fields are lists. Within each non-empty list, at least one element must be satisfied to match the field. If a field is left unspecified, any value will match the field. For an instant of time to match a complete time interval, all fields must match. Some fields support ranges and negative indices, and are detailed below. If a time zone is not specified, then the times are taken to be in UTC.
- `time_range`: Ranges inclusive of the starting time and exclusive of the end time to make it easy to represent times that start/end on hour boundaries. For example, `start_time: '17:00'` and `end_time: '24:00'` will begin at 17:00 and finish immediately before 24:00. They are specified like so:

  ```yaml
  times:
    - start_time: HH:MM
      end_time: HH:MM
  ```

- `weekday_range`: A list of days of the week, where the week begins on Sunday and ends on Saturday. Days should be specified by name (e.g. 'Sunday'). For convenience, ranges are also accepted of the form `<start_day>:<end_day>` and are inclusive on both ends. For example: `['monday:wednesday','saturday', 'sunday']`.

- `days_of_month_range`: A list of numerical days in the month. Days begin at `1`. Negative values are also accepted, which begin at the end of the month, e.g. `-1` during January would represent January 31. For example: `['1:5', '-3:-1']`. Extending past the start or end of the month will cause it to be clamped, e.g. specifying `['1:31']` during February will clamp the actual end date to 28 or 29 depending on leap years. Inclusive on both ends.

- `month_range`: A list of calendar months identified by a case-insensitive name (e.g. 'January') or by number, where `January = 1`. Ranges are also accepted. For example, `['1:3', 'may:august', 'december']`. Inclusive on both ends.

- `year_range`: A numerical list of years. Ranges are accepted. For example, `['2020:2022', '2030']`. Inclusive on both ends.

- `location`: A string that matches a location in the IANA time zone database. For example, `'Australia/Sydney'`. The location provides the time zone for the time interval. For example, a time interval with a location of `'Australia/Sydney'` that contained something like:

  ```yaml
  times:
    - start_time: 09:00
      end_time: 17:00
  weekdays: ['monday:friday']
  ```

  would include any time that fell between the hours of 9:00AM and 5:00PM, between Monday and Friday, using the local time in Sydney, Australia. You may also use `'Local'` as a location to use the local time of the machine where Alertmanager is running, or `'UTC'` for UTC time. If no timezone is provided, the time interval is taken to be in UTC.
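For reference, here's a sketch of how a time interval could be wired into the rest of the Alertmanager configuration to mute a route during working hours (the interval, receiver and matcher names are illustrative):

```yaml
time_intervals:
  - name: working-hours
    time_intervals:
      - times:
          - start_time: '09:00'
            end_time: '17:00'
        weekdays: ['monday:friday']
        location: 'Australia/Sydney'

route:
  receiver: default
  routes:
    - receiver: on-call
      matchers:
        - severity="critical"
      # only page on-call outside of working hours
      mute_time_intervals:
        - working-hours
```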
Databases⚑
PostgreSQL⚑
- New: Store expensive calculation values in a PostgreSQL database.
First you need to think about whether you actually need to store the calculations or whether you can do them on the fly with views. If views are too slow you can either use materialized views or triggers over calculation tables.

Materialized views are simpler to maintain but have some disadvantages, such as outdated data or unneeded processing of data. If you need totally current information, or if you don't want to periodically redo the calculations on all the rows, then triggers are probably the better solution. A rough sketch of the trigger approach is shown below.
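A minimal sketch of the trigger approach, assuming a hypothetical `orders` table whose per-customer totals are cached in an `order_totals` calculation table:

```sql
CREATE TABLE order_totals (
    customer_id integer PRIMARY KEY,
    total numeric NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION refresh_order_total() RETURNS trigger AS $$
BEGIN
    -- Recompute the total for the affected customer and upsert it
    INSERT INTO order_totals (customer_id, total)
    SELECT NEW.customer_id, COALESCE(sum(amount), 0)
    FROM orders
    WHERE customer_id = NEW.customer_id
    ON CONFLICT (customer_id) DO UPDATE SET total = EXCLUDED.total;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- DELETEs would need a similar trigger using OLD.customer_id
CREATE TRIGGER orders_refresh_total
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION refresh_order_total();
```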
- New: Drop all tables of a database.

```sql
drop schema public cascade;
create schema public;
```
- New: Views.
A view is a named query stored in the PostgreSQL database server. A view is defined based on one or more tables which are known as base tables, and the query that defines the view is referred to as a defining query.
After creating a view, you can query data from it as you would from a regular table. Behind the scenes, PostgreSQL will rewrite the query against the view and its defining query, executing it to retrieve data from the base tables.
Views don't store data, except for materialized views. In PostgreSQL, you can create special views called materialized views that store data physically and periodically refresh it from the base tables.
Simple views can be updatable.
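For example, a small sketch with a hypothetical single-table view (PostgreSQL automatically makes such views updatable):

```sql
CREATE TABLE employee (
    id serial PRIMARY KEY,
    name text NOT NULL,
    email text
);

-- A simple view over a single table without aggregates is automatically updatable
CREATE VIEW employee_contact AS
SELECT id, name, email
FROM employee;

-- This UPDATE is rewritten against the employee base table
UPDATE employee_contact
SET email = 'new@example.com'
WHERE id = 1;
```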
Advantages of views:

- Simplifying complex queries: Views help simplify complex queries. Instead of dealing with joins, aggregations, or filtering conditions, you can query from views as if they were regular tables.

  Typically, first, you create views based on complex queries and store them in the database. Then, you can use simple queries based on views instead of using complex queries.

- Logical data independence: If your applications use views, you can freely modify the structure of the base tables. In other words, views enable you to create a layer of abstraction over the underlying tables.

- Security and access control: Views enable fine-grained control over data access. You can create views that expose subsets of data in the base tables, hiding sensitive information.

  This is particularly useful when you have applications that require access to distinct portions of the data.
Creating a view

In PostgreSQL, a view is a named query stored in the database server. To create a new view, you can use the `CREATE VIEW` statement:

```sql
CREATE VIEW view_name AS query;
```

In this syntax:

- Specify the name of the view after the `CREATE VIEW` keywords.
- Specify a `SELECT` statement (query) that defines the view. The query is often referred to as the defining query of the view.
Creating a view examples

We'll use the customer table from the sample database.

Basic CREATE VIEW statement example

The following example uses the CREATE VIEW statement to create a view based on the customer table:

```sql
CREATE VIEW contact AS
SELECT first_name, last_name, email
FROM customer;
```

Output:

```
CREATE VIEW
```

The following query retrieves data from the contact view:

```sql
SELECT * FROM contact;
```

Output:

```
 first_name | last_name |                email
------------+-----------+-------------------------------------
 Jared      | Ely       | jared.ely@sakilacustomer.org
 Mary       | Smith     | mary.smith@sakilacustomer.org
 Patricia   | Johnson   | patricia.johnson@sakilacustomer.org
...
```
Using the CREATE VIEW statement to create a view based on a complex query

The following example creates a view based on the tables customer, address, city, and country:

```sql
CREATE VIEW customer_info AS
SELECT first_name, last_name, email, phone, city, postal_code, country
FROM customer
INNER JOIN address USING (address_id)
INNER JOIN city USING (city_id)
INNER JOIN country USING (country_id);
```

The following query retrieves data from the `customer_info` view:

```sql
SELECT * FROM customer_info;
```

Output:

```
 first_name | last_name |                email                |    phone     |      city      | postal_code |    country
------------+-----------+-------------------------------------+--------------+----------------+-------------+---------------
 Jared      | Ely       | jared.ely@sakilacustomer.org        | 35533115997  | Purwakarta     | 25972       | Indonesia
 Mary       | Smith     | mary.smith@sakilacustomer.org       | 28303384290  | Sasebo         | 35200       | Japan
 Patricia   | Johnson   | patricia.johnson@sakilacustomer.org | 838635286649 | San Bernardino | 17886       | United States
...
```
Creating a view based on another view

The following statement creates a view called `customer_usa` based on the `customer_info` view. The `customer_usa` view returns the customers who are in the United States:

```sql
CREATE VIEW customer_usa AS
SELECT *
FROM customer_info
WHERE country = 'United States';
```

Here's the query that retrieves data from the customer_usa view:

```sql
SELECT * FROM customer_usa;
```

Output:

```
 first_name | last_name |               email                |    phone     |          city           | postal_code |    country
------------+-----------+------------------------------------+--------------+-------------------------+-------------+---------------
 Zachary    | Hite      | zachary.hite@sakilacustomer.org    | 191958435142 | Akron                   | 88749       | United States
 Richard    | Mccrary   | richard.mccrary@sakilacustomer.org | 262088367001 | Arlington               | 42141       | United States
 Diana      | Alexander | diana.alexander@sakilacustomer.org | 6171054059   | Augusta-Richmond County | 30695       | United States
...
```
Replacing a view

Note: for simple changes check alter views.

To change the defining query of a view, you use the `CREATE OR REPLACE VIEW` statement:

```sql
CREATE OR REPLACE VIEW view_name AS query;
```

In this syntax, you add the `OR REPLACE` between the `CREATE` and `VIEW` keywords. If the view already exists, the statement replaces the existing view; otherwise, it creates a new view.

For example, the following statement changes the defining query of the contact view to include the phone information from the address table:

```sql
CREATE OR REPLACE VIEW contact AS
SELECT first_name, last_name, email, phone
FROM customer
INNER JOIN address USING (address_id);
```
Display a view on psql

To display a view on `psql`, you follow these steps:

First, open the Command Prompt on Windows or Terminal on Unix-like systems and connect to the PostgreSQL server:

```bash
psql -U postgres
```

Second, change the current database to `dvdrental`:

```
\c dvdrental
```

Third, display the view information using the `\d+ view_name` command. For example, the following shows the contact view:

```
\d+ contact
```

Output:

```
                                 View "public.contact"
   Column   |         Type          | Collation | Nullable | Default | Storage  | Description
------------+-----------------------+-----------+----------+---------+----------+-------------
 first_name | character varying(45) |           |          |         | extended |
 last_name  | character varying(45) |           |          |         | extended |
 email      | character varying(50) |           |          |         | extended |
 phone      | character varying(20) |           |          |         | extended |
View definition:
 SELECT customer.first_name,
    customer.last_name,
    customer.email,
    address.phone
   FROM customer
     JOIN address USING (address_id);
```
- New: Materialized Views.
PostgreSQL extends the view concept to the next level by allowing views to store data physically. These views are called materialized views.
Materialized views cache the result set of an expensive query and allow you to refresh data periodically.
The materialized views can be useful in many cases that require fast data access. Therefore, you often find them in data warehouses and business intelligence applications.
Benefits of materialized views:

- Improve query efficiency: If a query takes a long time to run, it could be because there are a lot of transformations being done to the data: subqueries, functions, and joins, for example.

  A materialized view can combine all of that into a single result set that's stored like a table.

  This means that any user or application that needs to get this data can just query the materialized view itself, as though all of the data is in the one table, rather than running the expensive query that uses joins, functions, or subqueries.

  Calculations can also be added to materialized views for any fields you may need, which can save time, and are often not stored in the database.

- Simplify a query: Like a regular view, a materialized view can also be used to simplify a query. If a query is using a lot of logic such as joins and functions, using a materialized view can help remove some of that logic and place it into the materialized view.
Disadvantages of a materialized view:

- Updates to data need to be set up: The main disadvantage of using materialized views is that the data needs to be refreshed.

  The data that's used to populate the materialized view is stored in the database tables. These tables can have their data updated, inserted, or deleted. When that happens, the data in the materialized view needs to be updated.

  This can be done manually, but it should be done automatically.

- Incremental updates are not supported: So the whole view is regenerated on each refresh.

- Data may be inconsistent: Because the data is stored separately in the materialized view, it may be inconsistent with the data in the underlying tables.

  This may be an issue if you are expecting or relying on the data being consistent.

  However, for scenarios where it doesn't matter (e.g. monthly reporting on months in the past), it may be OK.

- Storage requirements: Materialized views can consume significant storage space, depending on the size of your dataset. This consideration is crucial, especially in resource-limited environments.
Creating materialized views

To create a materialized view, you use the CREATE MATERIALIZED VIEW statement as follows:

```sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] view_name
AS
query
WITH [NO] DATA;
```

How it works:

- First, specify the `view_name` after the `CREATE MATERIALIZED VIEW` clause.
- Second, add the `query` that retrieves data from the underlying tables after the `AS` keyword.
- Third, if you want to load data into the materialized view at creation time, use the `WITH DATA` option; otherwise, use the `WITH NO DATA` option. If you use the `WITH NO DATA` option, the view is flagged as unreadable. It means that you cannot query data from the view until you load data into it.
- Finally, use the `IF NOT EXISTS` option to conditionally create a view only if it does not exist.
Refreshing data for materialized views

PostgreSQL will never refresh the data on its own; you need to define the processes that will update it.

To load or update the data into a materialized view, you use the `REFRESH MATERIALIZED VIEW` statement:

```sql
REFRESH MATERIALIZED VIEW view_name;
```

When you refresh data for a materialized view, PostgreSQL takes an exclusive lock on the view, so you cannot query it while the refresh is running.

To avoid this, you can use the `CONCURRENTLY` option:

```sql
REFRESH MATERIALIZED VIEW CONCURRENTLY view_name;
```

With the `CONCURRENTLY` option, PostgreSQL creates a temporary updated version of the materialized view, compares the two versions, and performs `INSERT` and `UPDATE` on only the differences.

PostgreSQL allows you to retrieve data from a materialized view while it is being updated. One requirement for using the `CONCURRENTLY` option is that the materialized view must have a `UNIQUE` index.

Automatic update of materialized views
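The refresh can be scheduled from outside PostgreSQL (cron, a systemd timer, your application) or from inside the database. For example, a sketch using the pg_cron extension, assuming it's available on your server (`view_name` is a placeholder):

```sql
-- Requires the pg_cron extension to be installed and enabled
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Refresh the materialized view at minute 0 of every hour
SELECT cron.schedule(
  'refresh-view-name',
  '0 * * * *',
  'REFRESH MATERIALIZED VIEW CONCURRENTLY view_name'
);
```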
Removing materialized views

To remove a materialized view, you use the `DROP MATERIALIZED VIEW` statement:

```sql
DROP MATERIALIZED VIEW view_name;
```

In this syntax, you specify the name of the materialized view that you want to drop after the `DROP MATERIALIZED VIEW` keywords.

Materialized view example

We'll use the tables in the sample database for creating a materialized view.
First, create a materialized view named `rental_by_category` using the `CREATE MATERIALIZED VIEW` statement:

```sql
CREATE MATERIALIZED VIEW rental_by_category
AS
SELECT c.name AS category,
       sum(p.amount) AS total_sales
FROM (((((payment p
    JOIN rental r ON ((p.rental_id = r.rental_id)))
    JOIN inventory i ON ((r.inventory_id = i.inventory_id)))
    JOIN film f ON ((i.film_id = f.film_id)))
    JOIN film_category fc ON ((f.film_id = fc.film_id)))
    JOIN category c ON ((fc.category_id = c.category_id)))
GROUP BY c.name
ORDER BY sum(p.amount) DESC
WITH NO DATA;
```

Because of the `WITH NO DATA` option, you cannot query data from the view. If you attempt to do so, you'll get the following error message:

```sql
SELECT * FROM rental_by_category;
```

Output:

```
[Err] ERROR: materialized view "rental_by_category" has not been populated
HINT: Use the REFRESH MATERIALIZED VIEW command.
```

PostgreSQL is helpful enough to give you a hint to load data into the view.

Second, load data into the materialized view using the `REFRESH MATERIALIZED VIEW` statement:

```sql
REFRESH MATERIALIZED VIEW rental_by_category;
```

Third, retrieve data from the materialized view:

```sql
SELECT * FROM rental_by_category;
```

Output:

```
  category   | total_sales
-------------+-------------
 Sports      |     4892.19
 Sci-Fi      |     4336.01
 Animation   |     4245.31
 Drama       |     4118.46
 Comedy      |     4002.48
 New         |     3966.38
 Action      |     3951.84
 Foreign     |     3934.47
 Games       |     3922.18
 Family      |     3830.15
 Documentary |     3749.65
 Horror      |     3401.27
 Classics    |     3353.38
 Children    |     3309.39
 Travel      |     3227.36
 Music       |     3071.52
(16 rows)
```

From now on, you can refresh the data in the `rental_by_category` view using the `REFRESH MATERIALIZED VIEW` statement.

However, to refresh it with the `CONCURRENTLY` option, you need to create a `UNIQUE` index for the view first:

```sql
CREATE UNIQUE INDEX rental_category ON rental_by_category (category);
```

Let's refresh data concurrently for the `rental_by_category` view:

```sql
REFRESH MATERIALIZED VIEW CONCURRENTLY rental_by_category;
```
Operating Systems⚑
Linux⚑
- New: Linux loki alerts.

```yaml
- alert: TooManyLogs
  expr: |
    sum by(hostname) (count_over_time({job="systemd-journal"} [1d])) / sum by(hostname) (count_over_time({job="systemd-journal"} [1d] offset 1d)) > 1.5
  for: 0m
  labels:
    severity: warning
  annotations:
    summary: "The server {{ $labels.hostname}} is generating too many logs"

- alert: TooFewLogs
  expr: |
    sum by(hostname) (count_over_time({job="systemd-journal"} [1d])) / sum by(hostname) (count_over_time({job="systemd-journal"} [1d] offset 1d)) < 0.5
  for: 0m
  labels:
    severity: warning
  annotations:
    summary: "The server {{ $labels.hostname}} is generating too few logs"
```
Linux Snippets⚑
- New: Clean the logs of a unit of the journal.

```bash
journalctl --vacuum-time=1s --unit=your.service
```

If you wish to clear all logs use:

```bash
journalctl --vacuum-time=1s
```
Unpackerr⚑
- New: Completed item still waiting: no extractable files found at.

This trace in the logs (which is super noisy) is nothing to worry about.

Unpackerr is just telling you something is stuck in your Sonarr queue. It's not an error, and it's not trying to extract it (because it has no compressed files). The fix is to figure out why it's stuck in the queue.