RE: pg_dump out of memory for large table with LOB - Mailing list pgsql-general
From: Jean-Marc Lessard
Subject: RE: pg_dump out of memory for large table with LOB
Date:
Msg-id: 8FC5F25FF3EC4744ADFCF20CBA3F44BE016811C1EF@SRV-CAMTL-EXCH2.Forensictech.com
In response to: Re: pg_dump out of memory for large table with LOB (Adrien Nayrat <adrien.nayrat@anayrat.info>)
List: pgsql-general
Adrien Nayrat wrote:
> With 17 million LO, it could eat lot of memory ;)
Yes it does.
I did several tests and here are my observations.
First, the memory settings were:
shared_buffers = 3GB
work_mem = 32MB
maintenance_work_mem = 1GB
effective_cache_size = 9MB
bytea_output = 'escape'
The largest LO is 160KB in the dumped table.
The largest LO is 20MB in the entire DB (far from the 1GB limit).
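For reference, here is a sketch of how LO sizes can be measured directly from pg_largeobject (an illustrative query, not necessarily the exact one I used):
-- Size of each large object, largest first (requires read access to pg_largeobject)
SELECT loid, sum(octet_length(data)) AS lo_bytes
FROM pg_largeobject
GROUP BY loid
ORDER BY lo_bytes DESC
LIMIT 10;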
I reduced shared_buffers to 512MB and observed the same behavior.
I increased the pagefile from 8GB to 16GB and the dump proceeded. The total memory used by pg_dump reached nearly 20GB.
I killed the dump after 200GB (20 hrs). It was paging aggressively and would have taken about 4 days to dump 800GB.
Here is the memory pattern:
pg_dump: reading large objects -> reads pg_largeobject_metadata; memory ramps up gradually from 0 to 4GB over 10 minutes and jumps to nearly 10GB at the end
pg_dump: reading dependency data -> drops to 7GB
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on -> paging begins and total memory reaches 20GB
pg_dump: dumping contents of table "ibisl1.signatures" -> starts writing to the dump file
pg_dump: saving large objects -> reads the pg_largeobject file nodes and writes them to the dump file
I performed the same test on a smaller DB (60GB):
I dumped a single table with LOBs and then the entire DB (3 tables have LOBs).
- Single 1GB table: 1GB including LOBs, 80 000 rows, max LOB size 100KB
pg_dump -h localhost -Fc -a -b -t signatures -v > d:\postgresql\sig.dmp
- Entire 60GB DB: 3 tables have LOBs, 240 000 LOBs, max LOB size 20MB
pg_dump -h localhost -Fc -v > d:\postgresql\db.dmp
Both dumps used the same amount of memory (160MB) and the dump files are more or less the same size, 53GB!
We can conclude that the single-table dump includes the entire pg_largeobject table, not only the LOBs referenced by that table (a check like the one below can confirm this).
So why read the large objects and build a huge structure in memory if all large objects are going to be dumped anyway?
There is certainly something to rework.
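As a sanity check for that conclusion, something like the following could compare the LOBs actually referenced by the table with the LOBs in the whole database (signature_data is a hypothetical OID column on ibisl1.signatures; adapt to the real column name):
-- LOBs referenced by the dumped table vs. LOBs in the whole database
SELECT
  (SELECT count(DISTINCT signature_data) FROM ibisl1.signatures) AS lobs_referenced,
  (SELECT count(*) FROM pg_largeobject_metadata) AS lobs_in_database;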
I ran another test: I dumped the application schema that contains the data.
pg_dump -h localhost -Fc -n ibis* -v > d:\postgresql\ibislx.dmp
pg_dump used only a few MB, so the memory issue is clearly due to LOBs.
The relation between the number of large objects and pg_dump memory looks linear, roughly 650 bytes/LOB:
Small DB: 240 000 LOBs = 0.160GB
Large DB: 28 500 000 LOBs = 19.5GB
That seems like a lot of memory to allocate just to process the LOBs later on.
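If that ratio holds, the footprint can be estimated before starting the dump (a rough rule of thumb based on my numbers above, not an exact formula):
-- Rough pg_dump memory estimate at ~650 bytes per large object
SELECT count(*) AS lob_count,
       count(*) * 650 / 1024.0^3 AS estimated_pg_dump_gb
FROM pg_largeobject_metadata;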
Another area where LOBs hurt is storage. LOBs are broken into 2KB pieces and stored in pg_largeobject.
Due to the block header, only three 2KB pieces fit in an 8KB block, wasting about 25% of the space (in fact pgstattuple reports ~20%).
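That figure came from pgstattuple; for example, with the pgstattuple extension installed, free_percent is the closest match to the wasted-space figure I quoted:
-- Requires: CREATE EXTENSION pgstattuple;
SELECT table_len, tuple_percent, free_percent
FROM pgstattuple('pg_largeobject');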
Would you recommend bytea over LOBs considering that the max LOB size is well below 1GB?
Is bytea preferable in terms of community support, performance, features, etc.?
We chose LOBs over bytea for binary data because LOBs are stored out of row, and in particular one table is frequently accessed, but not its LOBs.
However, bytea would be TOASTed and stored out of row anyway, since the binary data we save is bigger than 2KB.
Currently the frequently accessed table contains 40 rows per 8KB block (avg row length 200 bytes).
Can we just set toast_tuple_target to 256 bytes for that table to get similar results? (Something like the sketch below.)
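A minimal sketch of what I have in mind, assuming PostgreSQL 11 or later for the toast_tuple_target storage parameter; the table and column names (ibisl1.signatures, signature_data) are illustrative:
-- Per-table storage parameter (PostgreSQL 11+); minimum allowed value is 128 bytes
ALTER TABLE ibisl1.signatures SET (toast_tuple_target = 256);
-- Optionally force the bytea column out of line, uncompressed
ALTER TABLE ibisl1.signatures ALTER COLUMN signature_data SET STORAGE EXTERNAL;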
Will toasted bytea be stored in 2KB pieces, wasting 25% of the space (or less if toast_tuple_target is changed) as with LOBs?
I also presume that when a row is updated (columns other than the bytea), a new copy of the bytea will be made in the TOAST table?
If so, LOBs would be more suitable for a frequently updated table where the binary data rarely changes.
Jean-Marc Lessard
Administrateur de base de données / Database Administrator
Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com