Thread: Autovacuum, dead tuples and bloat
Hi everyone,
we can see in our database that the DB is 200 GB in size, with 99% bloat. After a VACUUM FULL the DB shrinks to 2 GB.
DB total size: 200GB
DB bloat: 198 GB
DB non-bloat: 2GB
We further see that during bulk updates (i.e. a long-running transaction) the DB is still growing, i.e. the size of the DB grew by +20 GB after the bulk updates.
My assumption is that after an autovacuum the 99% bloat should be available for reuse, while the DB size stays at 200 GB. In our case, I would only expect the DB to grow if the bulk updates exceed the current DB size (i.e. beyond 220 GB).
How could I verify my assumption?
I think of two possibilities:
- My assumption is wrong and for some reason the dead tuples are not cleaned up, so the space cannot be reused
- The bulk-update indeed exceeds the current DB size. (Then the growth is expected).
Can you help me to verify these assumptions? Are there any statistics available that could help me with my verification?
Thanks in advance &
Best regards,
Manuel
On 6/20/24 09:46, Shenavai, Manuel wrote:

> We further see, that during bulk updates (i.e. a long running transaction), the DB is still growing, i.e. the size of the DB growth by +20GB after the bulk updates.

How soon after the updates did you measure the above?

> My assumption is, that after an autovacuum, the 99% bloat should be available for usage again. But the DB size would stay at 200GB. In our case, I would only expect a growth of the DB, if the bulk-updates exceed the current DB size (i.e. 220 GB).

Was the transaction completed (commit/rollback)?
Are there other transactions using the table or tables?

> Can you help me to verify these assumptions? Are there any statistics available that could help me with my verification?

Use:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW

Select the rows that cover the table or tables involved. Look at the vacuum/autovacuum/analyze fields.

--
Adrian Klaver
adrian.klaver@aklaver.com
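To pull those fields for one table, a query along these lines should work (the table name is a placeholder taken from later in the thread; adjust to your schema):

```sql
-- Vacuum/analyze bookkeeping for a single table.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum, last_analyze, last_autoanalyze,
       vacuum_count, autovacuum_count, analyze_count, autoanalyze_count
FROM pg_stat_all_tables
WHERE relname = 'my_tablename';
```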
On 20/6/24 19:46, Shenavai, Manuel wrote:

> we can see in our database, that the DB is 200GB of size, with 99% bloat. After vacuum full the DB decreases to 2GB.
> DB total size: 200GB
> DB bloat: 198 GB
> DB non-bloat: 2GB
> We further see, that during bulk updates (i.e. a long running transaction), the DB is still growing, i.e. the size of the DB growth by +20GB after the bulk updates.
> My assumption is, that after an autovacuum, the 99% bloat should be available for usage again. But the DB size would stay at 200GB. In our case, I would only expect a growth of the DB, if the bulk-updates exceed the current DB size (i.e. 220 GB).
> How could I verify my assumption?
> I think of two possibilities:
> - My assumption is wrong and for some reason the dead tuples are not cleaned so that the space cannot be reused
> - The bulk-update indeed exceeds the current DB size. (Then the growth is expected).
Your assumptions should be based on the official manual, plus other material such as books and articles from reputable sources; as a last resort, even reading the source code could be considered.
For starters: do you have autovacuum enabled? If not, enable it.
Then monitor vacuum activity via pg_stat_user_tables, locate the tables where you would have expected a vacuum to happen but it did not, and consider autovacuum tuning.
Watch the logs for lines such as:
<N> dead row versions cannot be removed yet, oldest xmin: <some xid>
Those tuples are kept from being removed because they are still visible to long-running transactions. Monitor for those transactions.
You also have to check (if this is the case) whether autovacuum is being killed and not allowed to do its job.
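One way to spot such long-running transactions is to query pg_stat_activity; a sketch (ordering by transaction start so the oldest, most likely xmin-holding, backends come first):

```sql
-- Open transactions, oldest first; backend_xmin indicates which
-- backends hold back the xmin horizon that vacuum respects.
SELECT pid, usename, state, backend_xmin,
       now() - xact_start AS xact_age,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;
```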
Can you help me to verify these assumptions? Are there any statistics available that could help me with my verification?
Thanks in advance &
Best regards,
Manuel
-- Achilleas Mantzios IT DEV - HEAD IT DEPT Dynacom Tankers Mgmt (as agents only)
Hi,
Thanks for the suggestions. I found the following details about our autovacuum (see below). The TOAST table related to my table shows some vacuum-related log entries. This TOAST table seems to consume almost all of the space (27544451 pages * 8 kB ≈ 210 GB).
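The page math can be double-checked directly in SQL (assuming the default 8192-byte block size):

```sql
-- 27544451 pages at the default 8 kB block size
SELECT pg_size_pretty(27544451::bigint * 8192);  -- ≈ 210 GB
```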
Any thoughts on this?
Best regards,
Manuel
Autovacuum details
Details from pg_stat_all_tables:
{
"analyze_count": 0,
"autoanalyze_count": 11,
"autovacuum_count": 60,
"idx_scan": 1925218,
"idx_tup_fetch": 1836820,
"last_analyze": null,
"last_autoanalyze": "2024-06-19T09:39:50.680818+00:00",
"last_autovacuum": "2024-06-19T09:41:50.58592+00:00",
"last_vacuum": null,
"n_dead_tup": 120,
"n_live_tup": 9004,
"n_mod_since_analyze": 474,
"n_tup_del": 84,
"n_tup_hot_upd": 5,
"n_tup_ins": 118,
"n_tup_upd": 15180,
"relid": "27236",
"relname": "my_tablename",
"schemaname": "public",
"seq_scan": 2370,
"seq_tup_read": 18403231,
"vacuum_count": 0
}
From the server logs, I found autovacuum details for my TOAST table (pg_toast_27236):
{
"category": "PostgreSQLLogs",
"operationName": "LogEvent",
"properties": {
"errorLevel": "LOG",
"message": "2024-06-19 17:45:02 UTC-66731911.22f2-LOG: automatic vacuum of table \"0ecf0241-aab3-45d5-b020-e586364f810c.pg_toast.pg_toast_27236\":
index scans: 1
pages: 0 removed, 27544451 remain, 0 skipped due to pins, 27406469 skipped frozen
tuples: 9380 removed, 819294 remain, 0 are dead but not yet removable, oldest xmin: 654973054
buffer usage: 318308 hits, 311886 misses, 2708 dirtied
avg read rate: 183.934 MB/s, avg write rate: 1.597 MB/s
system usage: CPU: user: 1.47 s, system: 1.43 s, elapsed: 13.24 s",
"processId": 8946,
"sqlerrcode": "00000",
"timestamp": "2024-06-19 17:45:02.564 UTC"
},
"time": "2024-06-19T17:45:02.568Z"
}
Best regards,
Manuel
Here some more details related to the toast table:
{
"analyze_count": 0,
"autoanalyze_count": 0,
"autovacuum_count": 22,
"idx_scan": 1464881,
"idx_tup_fetch": 363681753,
"last_analyze": null,
"last_autoanalyze": null,
"last_autovacuum": "2024-06-19T17:45:02.564937+00:00",
"last_vacuum": null,
"n_dead_tup": 12,
"n_live_tup": 819294,
"n_mod_since_analyze": 225250407,
"n_tup_del": 112615126,
"n_tup_hot_upd": 0,
"n_tup_ins": 112635281,
"n_tup_upd": 0,
"relid": "27240",
"relname": "pg_toast_27236",
"schemaname": "pg_toast",
"seq_scan": 0,
"seq_tup_read": 0,
"vacuum_count": 0
}
On 6/21/24 12:31, Shenavai, Manuel wrote:

> This toast seems to consume all the data (27544451 pages * 8kb ≈ 210GB)

Those tuples (pages) are still live per the pg_stat entry in your second post:

"n_dead_tup": 12,
"n_live_tup": 819294

So they are needed.

Now the question is why are they needed?

1) All transactions that touch that table are done and that is the data that is left.

2) There are open transactions that still need to 'see' that data and autovacuum cannot remove them yet. Take a look at:

pg_stat_activity:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW

and pg_locks:
https://www.postgresql.org/docs/current/view-pg-locks.html

to see if there is a process holding that data open.

--
Adrian Klaver
adrian.klaver@aklaver.com
Thanks for the suggestion. This is what I found:

- pg_locks shows only one entry for my DB (I filtered by db oid). The entry is related to the relation "pg_locks" (AccessShareLock).
- pg_stat_activity shows ~30 connections (since the DB is in use, this is expected)

Is there anything specific I should further look into in these tables?

Regarding my last post: did we see a problem in the logs I provided in my previous post? We have seen that there are 819294 n_live_tup in the TOAST table. Do we know how much space these tuples use? Do we know how much space one tuple uses?

Best regards,
Manuel
On 6/22/24 13:13, Shenavai, Manuel wrote:

> - pg_locks shows only one entry for my DB (I filtered by db oid). The entry is related to the relation "pg_locks" (AccessShareLock).

Which would be the SELECT you did on pg_locks.

> - pg_stat_activity shows ~30 connections (since the DB is in use, this is expected)

The question then is, are any of those 30 connections holding a transaction open that needs to see the data in the affected table and is keeping autovacuum from recycling the tuples?

You might need to look at the Postgres logs to determine the above. Logging connections/disconnections helps, as does logging at least 'mod' statements. See:
https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
for more information.

> We have seen that there are 819294 n_live_tup in the toast-table. Do we know how much space these tuple use?

You will want to read:
https://www.postgresql.org/docs/current/storage-toast.html

Also:
https://www.postgresql.org/docs/current/functions-admin.html
9.27.7. Database Object Management Functions

There are functions there that show table sizes among other things.

--
Adrian Klaver
adrian.klaver@aklaver.com
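Those object-size functions can answer the space question directly; a sketch using the names seen in this thread (adjust schema and table names to your setup):

```sql
-- Heap vs. TOAST vs. everything (indexes included) for the table
SELECT pg_size_pretty(pg_relation_size('public.my_tablename'))       AS heap,
       pg_size_pretty(pg_relation_size('pg_toast.pg_toast_27236'))   AS toast,
       pg_size_pretty(pg_total_relation_size('public.my_tablename')) AS total;
```

Dividing the TOAST relation size by n_live_tup then gives a rough average size per live TOAST tuple.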
Thanks for the suggestions.
I checked pg_locks and pg_stat_activity, but I could not find a lock or a transaction on this table (at this point in time).
I assume that this problem may relate to long running transactions which write a lot of data. Is there already something in place that would help me to:
1) identify long running transactions
2) get an idea of the data-volume a single transaction writes?
I tested log_statement='mod' but this writes too much data (including all payloads). I would rather get a summary entry for each transaction, like:
"Tx 4752 ran for 1 hour and 1 GB of data was written."
Is there something like this already available in postgres?
Best regards,
Manuel
On 6/26/24 00:03, Shenavai, Manuel wrote:

> I assume that this problem may relate to long running transactions which write a lot of data. Is there already something in place that would help me to:
> 1) identify long running transactions
> 2) get an idea of the data-volume a single transaction writes?

https://www.postgresql.org/docs/current/runtime-config-logging.html

log_min_duration_statement

Read the Note below the entry. This will log long-running queries, though it will not show the amount of data written.

If you want to go more in depth there is:

https://www.postgresql.org/docs/current/pgstatstatements.html

It is an extension that you will need to install per the instructions at the link.

--
Adrian Klaver
adrian.klaver@aklaver.com
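Once pg_stat_statements is installed (it also needs shared_preload_libraries = 'pg_stat_statements' and a server restart), a query like the following can rank statements by data written; the wal_bytes column assumes PostgreSQL 13 or later:

```sql
-- Statements that generated the most WAL (a proxy for data written)
SELECT left(query, 60) AS query, calls,
       pg_size_pretty(wal_bytes) AS wal_written,
       round(total_exec_time::numeric, 1) AS total_ms
FROM pg_stat_statements
ORDER BY wal_bytes DESC
LIMIT 10;
```

This aggregates per normalized statement rather than per transaction, but it usually points at the heavy writers quickly.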