Thread: pg_restore takes more time on creation of rules
Hey,
I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).
Each dump contains only one database.
The sizes:
A-10GB
B-20GB
C-5GB.
For an unclear reason the restore of the third database is taking a lot of time. It isn't stuck; it just keeps creating DB rules. This database has more than 400K rules.
I changed a few postgresql.conf parameters:
shared_buffers = 2GB
effective_cache_size = 65GB
checkpoint_segments = 20
checkpoint_completion_target = 0.9
maintenance_work_mem = 10GB
checkpoint_timeout = 30min
work_mem = 64MB
autovacuum = off
full_page_writes = off
wal_buffers = 50MB
My machine has 31 CPUs and 130 GB of RAM.
Any idea why the restore of the first two databases takes about 15 minutes while the third database, which is the smallest, takes more than an hour?
I restore the dumps with pg_restore with 5 jobs (-j 5).
I know that it is an old version; just trying to help.
On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:
>Hey,
>I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).
>Each dump contains only one database.
>The sizes:
>A-10GB
>B-20GB
>C-5GB.
>
>For an unclear reason the restore of the third database is taking a lot of
>time. It isn't stuck; it just keeps creating DB rules. This database has
>more than 400K rules.
>
What do you mean by "rules"?

>I changed a few postgresql.conf parameters:
>shared_buffers = 2GB
>effective_cache_size = 65GB
>checkpoint_segments = 20
>checkpoint_completion_target = 0.9
>maintenance_work_mem = 10GB
>checkpoint_timeout = 30min
>work_mem = 64MB
>autovacuum = off
>full_page_writes = off
>wal_buffers = 50MB
>
>My machine has 31 CPUs and 130 GB of RAM.
>
>Any idea why the restore of the first two databases takes about 15 minutes
>while the third database, which is the smallest, takes more than an hour?
>I restore the dumps with pg_restore with 5 jobs (-j 5).
>
Well, presumably the third database has complexity in other places,
possibly spending a lot of time on CPU, while the other databases don't
have such an issue.

What would help is a CPU profile, e.g. from perf.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
By rules I mean DB rules (similar to triggers, but different)
On Wed, May 22, 2019 at 18:41, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:
>Hey,
>I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).
>Each dump contains only one database.
>The sizes :
>A-10GB
>B-20GB
>C-5GB.
>
>For an unclear reason the restore of the third database is taking a lot of
>time. It isn't stuck; it just keeps creating DB rules. This database has
>more than 400K rules.
>
What do you mean by "rules"?
>I changed a few postgresql.conf parameters :
>shared_buffers = 2GB
>effective_cache_size = 65GB
>checkpoint_segments =20
>checkpoint_completion_target = 0.9
>maintenance_work_mem = 10GB
>checkpoint_timeout=30min
>work_mem=64MB
>autovacuum = off
>full_page_writes=off
>wal_buffers=50MB
>
>my machine has 31 cpu and 130GB of ram.
>
>Any idea why the restore of the two dbs takes about 15 minutes while the
>third db which is the smallest takes more than 1 hour ? I restore the
>dump with pg_restore with 5 jobs (-j).
>
Well, presumably the third database has complexity in other places,
possibly spending a lot of time on CPU, while the other databases don't
have such an issue.
What would help is a CPU profile, e.g. from perf.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
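[Editor's note: for readers unfamiliar with the feature, PostgreSQL rewrite rules are created with CREATE RULE and rewrite a matching query into additional or alternative queries. A minimal illustrative sketch (the table and rule names below are made up, not taken from this thread):

```sql
-- Hypothetical tables for illustration only
CREATE TABLE orders (id int, status text);
CREATE TABLE orders_log (id int, old_status text);

-- A rewrite rule: every UPDATE on "orders" also inserts the old row
-- state into "orders_log" (roughly trigger-like, but implemented by
-- rewriting the query itself)
CREATE RULE log_order_update AS
    ON UPDATE TO orders
    DO ALSO
    INSERT INTO orders_log VALUES (OLD.id, OLD.status);
```

During pg_restore, each of the 400K+ rules is recreated with a statement of this shape, one CREATE RULE at a time.]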
On Wed, May 22, 2019 at 06:44:29PM +0300, Mariel Cherkassky wrote:
>By rules I mean DB rules (similar to triggers, but different)
>
I very much doubt such a high number of rules was expected during the
design (especially if it's on a single table), so perhaps there's an
O(N^2) piece of code somewhere.

I suggest you do a bit of profiling, for example using perf [1], which
would show where the time is spent.

[1] https://wiki.postgresql.org/wiki/Profiling_with_perf

And please stop top-posting, it makes it much harder to follow the
discussion.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
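[Editor's note: before reaching for perf, a quick way to check whether the rules are concentrated on one table (the O(N^2) suspicion above) is to query the standard pg_rules system view in the restored database. A generic sketch, not taken from this thread:

```sql
-- Show the tables carrying the most rewrite rules
SELECT schemaname, tablename, count(*) AS rule_count
FROM pg_rules
GROUP BY schemaname, tablename
ORDER BY rule_count DESC
LIMIT 20;
```

If the bulk of the 400K rules land on a handful of tables, that would support the theory that per-table rule handling is the hot spot.]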