Thread: Performance of archive logging in a PITR restore

From:
"Mark Steben"
Date:

First of all, I did pose this question first on the pgsql-admin mailing list,
and I know it is not appreciated to post across multiple mailing lists, so I
apologize in advance. I do not make it a practice to do so, but this being a
performance issue I think I should have inquired on this list first. Rest
assured I won't double post again.

 

The issue is that during a restore on a remote site (Postgres 8.2.5),
archived logs are taking an average of 35-40 seconds apiece to restore.
This is roughly the same speed at which they are archived on the production
site. I compress the logs when I copy them over, then uncompress them
during the restore using a cat | gzip -dc command. I don't think the
bottleneck is in that command: a log is typically uncompressed and copied
in less than 2 seconds when I do this manually. Also, when I pass a log
that is already uncompressed, the performance improves by only about 10 percent.
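For reference, the decompression step described above is typically wired into
recovery.conf along these lines (the archive path below is a placeholder, not
taken from this thread):

```shell
# Hypothetical recovery.conf excerpt; /mnt/archive is a placeholder path.
# %f is the WAL file name PostgreSQL asks for, %p the path to restore it to.
restore_command = 'gzip -dc /mnt/archive/%f.gz > %p'
```

Given that a manual decompress-and-copy of one segment finishes in under 2
seconds, the 35-40 seconds per file is being spent in WAL replay, not in the
restore_command itself.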

 

A log compresses with gzip down to between 5.5 and 6.5 MB.

I have attempted increases in shared_buffers (250MB to 1500MB).

  Other relevant (I think) config parameters include:

       maintenance_work_mem (300MB)
       work_mem (75MB)
       wal_buffers (48)
       checkpoint_segments (32)
       autovacuum (off)

 


ipcs -l

 

------ Shared Memory Limits --------

max number of segments = 4096

max seg size (kbytes) = 4194303

max total shared memory (kbytes) = 1073741824

min seg size (bytes) = 1

 

------ Semaphore Limits --------

max number of arrays = 128

max semaphores per array = 250

max semaphores system wide = 32000

max ops per semop call = 32

semaphore max value = 32767

 

------ Messages: Limits --------

max queues system wide = 16

max size of message (bytes) = 65536

default max size of queue (bytes) = 65536

 

Our database size is about 130 GB. We use tar to back up the file
structure; it takes roughly an hour to extract the tarball before PITR
log recovery begins. The tarball itself is 31GB compressed.

 

Again, I apologize for the annoying double posting, but I am pretty much
out of ideas to make this work.

 

 

 

Mark Steben, Database Administrator
@utoRevenue® "Join the Revenue-tion"
95 Ashley Ave. West Springfield, MA 01089
413-243-4800 x1512 (Phone) | 413-732-1824 (Fax)

@utoRevenue is a registered trademark and a division of Dominion Enterprises

 

From:
"Joshua D. Drake"
Date:

On Mon, 2009-03-16 at 12:11 -0400, Mark Steben wrote:
> First of all, I did pose this question first on the pgsql-admin
> mailing list.


> The issue is that during a restore on a remote site, (Postgres 8.2.5)
>
> archived logs are taking an average of 35-40 seconds apiece to
> restore.

Archive logs are restored in a serialized manner so they will be slower
to restore in general.

Joshua D. Drake



--
PostgreSQL - XMPP: 
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


From:
Heikki Linnakangas
Date:

Joshua D. Drake wrote:
> On Mon, 2009-03-16 at 12:11 -0400, Mark Steben wrote:
>> The issue is that during a restore on a remote site, (Postgres 8.2.5)

8.2.5 is quite old. You should upgrade to the latest 8.2.X release.

>> archived logs are taking an average of 35-40 seconds apiece to
>> restore.
>
> Archive logs are restored in a serialized manner so they will be slower
> to restore in general.

Yeah, if you have several concurrent processes on the primary doing I/O
and generating log, at restore the I/O will be serialized.

Version 8.3 is significantly better with this (as long as you don't
disable full_page_writes). In earlier versions, each page referenced in
the WAL was read from the filesystem, only to be replaced with the full
page image. In 8.3, we skip the read and just write over the page image.
Depending on your application, that can make a very dramatic difference
to restore time.
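The difference described above can be sketched schematically. This is an
illustration only, not PostgreSQL source code; the class and function names
here are invented for the example:

```python
class Storage:
    """Toy page store standing in for the data files on disk."""
    def __init__(self):
        self.pages = {}
        self.reads = 0  # count synchronous reads to show the difference

    def read(self, page_id):
        self.reads += 1
        return self.pages.get(page_id)

    def write(self, page_id, image):
        self.pages[page_id] = image


def replay_pre_83(storage, page_id, full_page_image):
    # Pre-8.3 behaviour as described: the page is read from the filesystem
    # first, even though the full-page image is about to replace it wholesale.
    storage.read(page_id)
    storage.write(page_id, full_page_image)


def replay_83(storage, page_id, full_page_image):
    # 8.3 behaviour: skip the read and write the full-page image directly.
    storage.write(page_id, full_page_image)
```

Each avoided read is a synchronous I/O on the single recovery process, which
is why skipping it can shorten restore time so dramatically.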

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com