Thread: speed up restore from dump
Hello all,

I've been trying to speed up the restore operation of my database without success. I have a 200MB dump file obtained with 'pg_dumpall --clean --oids'. After restore it produces a database with one single table (1,000,000 rows). I have also some indexes on that table. That's it.

It always takes me about 20 minutes to reload the data, whatever settings I change. I have so far touched these settings:

- fsync = off
- shared_buffers = 24MB
- temp_buffers = 24MB
- maintenance_work_mem = 128MB
- full_page_writes = off
- wal_writer_delay = 10000
- checkpoint_segments = 200
- checkpoint_timeout = 1800
- autovacuum = off

I started with a default installation. First I changed fsync to off, then I started touching other cfg params, but I always get around 20 minutes (21, 19, 18)...

Can I expect these 20 minutes to be significantly reduced? What other cfg parameters should I touch? Can anyone shed some light on this? Any faster approach to upgrade from 8.1 to 8.3?

thank you
Joao
On Thursday 30 October 2008, Joao Ferreira gmail <joao.miguel.c.ferreira@gmail.com> wrote:
> What other cfg parameters should I touch?

work_mem set to most of your free memory might help. You're probably just disk-bound, though. What does vmstat say during the restore?

--
Alan
On Thu, 2008-10-30 at 11:39 -0700, Alan Hodgson wrote:
> On Thursday 30 October 2008, Joao Ferreira gmail
> <joao.miguel.c.ferreira@gmail.com> wrote:
> > What other cfg parameters should I touch?
>
> work_mem set to most of your free memory might help.

I've raised work_mem to 128MB. Still get the same 20 minutes!

> You're probably just disk-bound, though. What does vmstat say during the
> restore?

During restore:

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 3  1 230204   4972   1352 110128    2    1    17    63   24   56 12  2 85  0

After restore has finished:

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0 246864  59972   2276 186420    2    1    18    63   28   56 12  2 85  0

joao
On Thu, Oct 30, 2008 at 07:28:57PM +0000, Joao Ferreira gmail wrote:
> On Thu, 2008-10-30 at 11:39 -0700, Alan Hodgson wrote:
> > You're probably just disk-bound, though. What does vmstat say during
> > the restore?
>
> During restore:
> # vmstat
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  3  1 230204   4972   1352 110128    2    1    17    63   24   56 12  2 85  0
>
> After restore has finished:
> # vmstat
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  0 246864  59972   2276 186420    2    1    18    63   28   56 12  2 85  0

From the output you've given it doesn't look as though you left vmstat running while the restore was running; the first set of numbers it prints out is an average since boot, and rarely representative of the current IO usage. Try running "vmstat 5" to get output every 5 seconds; you should be able to see things happening a bit more easily that way.

Another tool I'd recommend is iostat. I tend to invoke it as "iostat -mx 5 /dev/sd?" to get it to print out values for each individual disk.

Sam
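A rough sketch (not from the thread) of what watching for swap activity with "vmstat 5" might look like: pipe the samples through awk and flag any line where the si column (column 7) reaches double digits. It's fed here from a canned sample line rather than a live vmstat so the filter itself is visible in isolation.

```shell
# Flag vmstat samples with double-digit swap-in (si is field 7).
# In real use, replace the printf with:  vmstat 5 | awk ...
printf '%s\n' " 3  1 230204   4972   1352 110128   12    1    17    63   24   56 12  2 85  0" |
awk '$7 ~ /^[0-9]+$/ && $7 >= 10 { print "swapping: si=" $7 }'
```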
On Thursday 30 October 2008, Joao Ferreira gmail wrote:
> During restore:
> # vmstat
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  3  1 230204   4972   1352 110128    2    1    17    63   24   56 12  2 85  0

Does that machine really have only 256MB of RAM? And it's over 200MB into swap?

--
Alan
Alan Hodgson wrote:
> On Thursday 30 October 2008, Joao Ferreira gmail
> > During restore:
> > # vmstat
> > procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> >  3  1 230204   4972   1352 110128    2    1    17    63   24   56 12  2 85  0
>
> Does that machine really have only 256MB of RAM? And it's over 200MB into
> swap?

Huh, if that's the case then you should drop maintenance_work_mem a lot (and not increase work_mem too much either), because having it high enough that it causes swapping leads to worse performance.

--
Alvaro Herrera                        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
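For a box this small, the advice above might translate into a postgresql.conf fragment along these lines. This is only a sketch with illustrative values, not settings anyone in the thread actually recommended:

```ini
# Restore-time settings sketched for a ~360MB RAM machine (illustrative values)
shared_buffers = 24MB          # modest, leaves RAM for the OS file cache
work_mem = 4MB                 # keep small so sorts don't push the box into swap
maintenance_work_mem = 64MB    # helps index builds, but must fit in real RAM
fsync = off                    # only safe for a throwaway restore
autovacuum = off               # re-enable after the load
```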
On Thursday 30 October 2008, Joao Ferreira <jmcferreira@critical-links.com> wrote:
> well..... see for yourself... (360 RAM, 524 SWAP) that's what it is...
> it supposed to be somewhat an embedded product...

Clearly your hardware is your speed limitation. If you're swapping at all, anything running on the machine is going to be slow.

--
Alan
On Thu, Oct 30, 2008 at 02:28:38PM -0700, Alan Hodgson wrote:
> On Thursday 30 October 2008, Joao Ferreira <jmcferreira@critical-links.com>
> wrote:
> > well..... see for yourself... (360 RAM, 524 SWAP) that's what it is...
> > it supposed to be somewhat an embedded product...
>
> Clearly your hardware is your speed limitation. If you're swapping at all,
> anything running on the machine is going to be slow.

The vmstat output only showed the odd block going in and out; but performance is only really going to suffer when it's thrashing. If the swap-in number stays in the double digits for a reasonable amount of time then you should probably look at what's causing it. Giving memory back to the system to use for caching the file system can be good; lots of shared memory can also be good.

Building indexes takes time and IO bandwidth; maybe you could look at building fewer of them? I'd be tempted to pull the import script apart into its constituent parts, i.e. the initial data load, and then all the constraints/index builds separately. Then run through executing them by hand and see what you can change to make things more efficient.

Sam
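Pulling the restore apart as Sam suggests might look something like the following. This is a hypothetical sketch using a tiny stand-in dump file (the real 200MB dump isn't shown in the thread), and it assumes a plain-SQL dump where CREATE INDEX statements start at column one:

```shell
# Stand-in for the real dump, just to show the mechanics of the split.
cat > /tmp/dump.sql <<'EOF'
CREATE TABLE t (a integer);
COPY t (a) FROM stdin;
1
\.
CREATE INDEX t_a_idx ON t (a);
EOF

# Phase 1: everything except the index builds (schema + data load).
grep -v '^CREATE INDEX' /tmp/dump.sql > /tmp/load.sql
# Phase 2: the index builds, to be run and timed separately.
grep '^CREATE INDEX' /tmp/dump.sql > /tmp/indexes.sql

# Each phase can then be timed on its own, e.g.:
#   time psql -f /tmp/load.sql
#   time psql -f /tmp/indexes.sql
```

Timing the two files separately shows whether the 20 minutes is going into the COPY or into the index builds, which is the bottleneck question Robert raises below.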
On Thu, 2008-10-30 at 13:08 -0700, Alan Hodgson wrote:
> On Thursday 30 October 2008, Joao Ferreira gmail
> > During restore:
> > # vmstat
> > procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> >  3  1 230204   4972   1352 110128    2    1    17    63   24   56 12  2 85  0
>
> Does that machine really have only 256MB of RAM? And it's over 200MB into
> swap?

well..... see for yourself... (360 RAM, 524 SWAP) that's what it is... it is supposed to be somewhat an embedded product...

# cat /proc/meminfo
MemTotal:       360392 kB
MemFree:         59548 kB
Buffers:          7392 kB
Cached:          62640 kB
SwapCached:      44724 kB
Active:         247892 kB
Inactive:        29936 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       360392 kB
LowFree:         59548 kB
SwapTotal:      524280 kB
SwapFree:       292532 kB
Dirty:             744 kB
Writeback:           0 kB
AnonPages:      190344 kB
Mapped:          42772 kB
Slab:            13176 kB
SReclaimable:     5116 kB
SUnreclaim:       8060 kB
PageTables:       2728 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:    704476 kB
Committed_AS:  1035156 kB
VmallocTotal:   524280 kB
VmallocUsed:      4020 kB
VmallocChunk:   520164 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
HugePages_Surp:      0
Hugepagesize:     4096 kB

# cat /proc/cpuinfo
processor       : 0
vendor_id       : CentaurHauls
cpu family      : 6
model           : 9
model name      : VIA Nehemiah
stepping        : 8
cpu MHz         : 1002.309
cache size      : 64 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr cx8 sep mtrr pge cmov pat mmx fxsr sse up rng rng_en ace ace_en
bogomips        : 2009.04
clflush size    : 32
On Friday 31 October 2008 08:07:08 Sam Mason wrote:
> On Thu, Oct 30, 2008 at 02:28:38PM -0700, Alan Hodgson wrote:
> > On Thursday 30 October 2008, Joao Ferreira
> > <jmcferreira@critical-links.com> wrote:
> > > well..... see for yourself... (360 RAM, 524 SWAP) that's what it is...
> > > it supposed to be somewhat an embedded product...
> >
> > Clearly your hardware is your speed limitation. If you're swapping at
> > all, anything running on the machine is going to be slow.
>
> The vmstat output only showed the odd block going in and out; but
> performance is only really going to suffer when it's thrashing. If the
> swap-in number stays in the double digits for a reasonable amount of
> time then you should probably look at what's causing it. Giving memory
> back to the system to use for caching the file system can be good; lots
> of shared memory can also be good.

Well, I think he needs to cut back on work_mem, but he might want to give a little more to wal_buffers.

> Building indexes takes time and IO bandwidth; maybe you could look at
> building fewer of them? I'd be tempted to pull the import script apart
> into its constituent parts, i.e. the initial data load, and then all the
> constraints/index builds separately. Then run through executing them by
> hand and see what you can change to make things more efficient.

It would be good to know where and when his bottlenecks are... i.e. I could see him being I/O, memory, or CPU bottlenecked depending on where he is in the restore process.

--
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com