RE: Autovacuum, dead tuples and bloat - Mailing list pgsql-general
From: Shenavai, Manuel
Subject: RE: Autovacuum, dead tuples and bloat
Date:
Msg-id: AM9PR02MB7410AA1DEA442AED52FBB5E9E8C92@AM9PR02MB7410.eurprd02.prod.outlook.com
In response to: Re: Autovacuum, dead tuples and bloat (Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>)
Responses: RE: Autovacuum, dead tuples and bloat; Re: Autovacuum, dead tuples and bloat
List: pgsql-general
Hi,
Thanks for the suggestions. I found the following details on our autovacuum (see below). The TOAST table belonging to my table shows some vacuum-related log entries, and it seems to hold almost all of the space (27544451 pages * 8 kB ≈ 210 GB).
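For reference, that figure can be cross-checked directly from SQL; a rough sketch using the table and TOAST names that appear in the stats and logs below:

  -- compare the heap, TOAST and total on-disk size of the table
  SELECT pg_size_pretty(pg_relation_size('public.my_tablename'))       AS heap_size,
         pg_size_pretty(pg_relation_size('pg_toast.pg_toast_27236'))   AS toast_size,
         pg_size_pretty(pg_total_relation_size('public.my_tablename')) AS total_size;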
Any thoughts on this?
Best regards,
Manuel
Autovacuum details
Details from pg_stat_all_tables:
{
"analyze_count": 0,
"autoanalyze_count": 11,
"autovacuum_count": 60,
"idx_scan": 1925218,
"idx_tup_fetch": 1836820,
"last_analyze": null,
"last_autoanalyze": "2024-06-19T09:39:50.680818+00:00",
"last_autovacuum": "2024-06-19T09:41:50.58592+00:00",
"last_vacuum": null,
"n_dead_tup": 120,
"n_live_tup": 9004,
"n_mod_since_analyze": 474,
"n_tup_del": 84,
"n_tup_hot_upd": 5,
"n_tup_ins": 118,
"n_tup_upd": 15180,
"relid": "27236",
"relname": "my_tablename",
"schemaname": "public",
"seq_scan": 2370,
"seq_tup_read": 18403231,
"vacuum_count": 0
}
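(These numbers come out of pg_stat_all_tables; a query along these lines should reproduce them for the table in question:)

  -- per-table vacuum/analyze statistics for the table in question
  SELECT *
  FROM pg_stat_all_tables
  WHERE schemaname = 'public'
    AND relname = 'my_tablename';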
From the server logs, I found autovacuum details for my TOAST table (pg_toast_27236):
{
"category": "PostgreSQLLogs",
"operationName": "LogEvent",
"properties": {
"errorLevel": "LOG",
"message": "2024-06-19 17:45:02 UTC-66731911.22f2-LOG: automatic vacuum of table \"0ecf0241-aab3-45d5-b020-e586364f810c.pg_toast.pg_toast_27236\":
index scans: 1
pages: 0 removed, 27544451 remain, 0 skipped due to pins, 27406469 skipped frozen
tuples: 9380 removed, 819294 remain, 0 are dead but not yet removable, oldest xmin: 654973054
buffer usage: 318308 hits, 311886 misses, 2708 dirtied
avg read rate: 183.934 MB/s, avg write rate: 1.597 MB/s
system usage: CPU: user: 1.47 s, system: 1.43 s, elapsed: 13.24 s",
"processId": 8946,
"sqlerrcode": "00000",
"timestamp": "2024-06-19 17:45:02.564 UTC"
},
"time": "2024-06-19T17:45:02.568Z"
}
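If the pgstattuple extension is available, the amount of live, dead and free space inside the TOAST table could also be inspected directly; a rough sketch (requires appropriate privileges and scans the whole relation):

  -- install the extension once, then scan the TOAST table for live/dead/free space
  CREATE EXTENSION IF NOT EXISTS pgstattuple;
  SELECT * FROM pgstattuple('pg_toast.pg_toast_27236');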
Best regards,
Manuel
From: Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>
Sent: 20 June 2024 19:10
To: pgsql-general@lists.postgresql.org
Subject: Re: Autovacuum, dead tuples and bloat
On 20/6/24 19:46, Shenavai, Manuel wrote:
Hi everyone,
we can see in our database that the DB is 200 GB in size, with 99% bloat. After a VACUUM FULL the DB shrinks to 2 GB.
DB total size: 200GB
DB bloat: 198 GB
DB non-bloat: 2GB
We further see that during bulk updates (i.e. a long-running transaction) the DB is still growing, i.e. the size of the DB grew by +20 GB after the bulk updates.
My assumption is that after an autovacuum the 99% bloat should be available for reuse, while the DB size stays at 200 GB. In that case I would only expect the DB to grow if the bulk updates exceed the current DB size (i.e. 220 GB).
How could I verify my assumption?
I think of two possibilities:
- My assumption is wrong and for some reason the dead tuples are not cleaned so that the space cannot be reused
- The bulk-update indeed exceeds the current DB size. (Then the growth is expected).
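(One way to watch this directly is to track the database and table sizes around a bulk update; a rough sketch:)

  -- total size of the current database
  SELECT pg_size_pretty(pg_database_size(current_database()));
  -- largest tables, including TOAST and indexes
  SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY pg_total_relation_size(oid) DESC
  LIMIT 10;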
Your assumptions should be based on the official manual and other material such as books and articles from reputable sources; even reading the source code as a last resort could be considered.
For starters: do you have autovacuum enabled? If not, enable it.
Then monitor vacuum activity via pg_stat_user_tables, locate the tables where you would have expected vacuum to run but it did not, and then consider autovacuum tuning.
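A rough sketch of such a check (the ordering and LIMIT are arbitrary examples):

  SHOW autovacuum;  -- should be 'on'
  -- tables with the most dead tuples and when they were last (auto)vacuumed
  SELECT schemaname, relname, n_live_tup, n_dead_tup,
         last_vacuum, last_autovacuum, autovacuum_count
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 20;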
Watch the logs for lines such as:

<N> dead row versions cannot be removed yet, oldest xmin: <some xid>

Those rows are kept from being removed because they are still visible to long-running transactions. Monitor for those transactions. You also have to monitor (if this is the case) for autovacuum being killed and not allowed to do its job.
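For example, long-running transactions that hold back the xmin horizon can be spotted with something like this (a rough sketch):

  -- oldest open transactions; very old ones can keep dead tuples from being removed
  SELECT pid, usename, state, xact_start, backend_xmin,
         now() - xact_start AS xact_age,
         left(query, 60) AS query
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
  ORDER BY xact_start
  LIMIT 10;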
Can you help me to verify these assumptions? Are there any statistics available that could help me with my verification?
Thanks in advance &
Best regards,
Manuel
--
Achilleas Mantzios
IT DEV - HEAD
IT DEPT
Dynacom Tankers Mgmt (as agents only)