Thread: Copy database performance issue

From: Steve
Date:
Hello there;

I've got an application that has to copy an existing database to a new
database on the same machine.

I used to do this with a pg_dump command piped to psql to perform the
copy; however the database is 18 gigs large on disk and this takes a LONG
time to do.

So I read up, found some things in this list's archives, and learned that
I can use createdb --template=old_database_name to do the copy in a much
faster way since people are not accessing the database while this copy
happens.
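For reference, a minimal sketch of that approach ("olddb" and "newdb" are placeholder names; the source database must have no other active connections while the copy runs, or CREATE DATABASE will refuse to use it as a template):

```shell
# Copy olddb to newdb via the template mechanism. This is a file-level
# copy of the database directory, so it skips the dump/restore round
# trip entirely.
createdb --template=olddb newdb

# The equivalent SQL, if you prefer psql:
#   CREATE DATABASE newdb TEMPLATE olddb;
```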


The problem is, it's still too slow.  My question is, is there any way I
can use 'cp' or something similar to copy the data, and THEN after that's
done modify the database system files/system tables to recognize the
copied database?

For what it's worth, I've got fsync turned off, and I've read every tuning
thing out there and my settings there are probably pretty good.  It's a
Solaris 10 machine (V440, 2 processor, 4 Ultra320 drives, 8 gig ram) and
here's some stats:

shared_buffers = 300000
work_mem = 102400
maintenance_work_mem = 1024000

bgwriter_lru_maxpages=0
bgwriter_lru_percent=0

fsync = off
wal_buffers = 128
checkpoint_segments = 64


Thank you!


Steve Conley

Re: Copy database performance issue

From: Jignesh Shah
Date:
Steve,

Are you using the latest update release of Solaris 10?

When you are doing the copy, did you check with prstat -amL to see if it
is saturating any CPU?

If it is saturating a CPU, then at least that narrows things down: you
need to improve the CPU utilization of the copy process.

Brendan Gregg's "hotuser" script, which uses DTrace with Perl post
processing, will help you figure out which functions are causing the
high CPU utilization, and then maybe somebody from the PostgreSQL team
can figure out what is causing the slow copy.

If none of the cores shows up as near 100%, then the next step is to
figure out whether any disk is 100% utilized via iostat -xczmP.

With this information it should be easier to figure out the next steps
in your case.
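The diagnostic sequence above might look like this on Solaris 10 (the hotuser invocation is a sketch: the script comes from Brendan Gregg's DTraceToolkit, not the base OS, and the backend PID is a placeholder you would find yourself):

```shell
# Per-thread CPU usage with microstate accounting; look for a single
# thread pinned near 100% in the USR or SYS columns.
prstat -amL

# If a CPU is saturated, profile which user-land functions are hot.
# Run against the PostgreSQL backend process performing the copy:
./hotuser -p <backend_pid>

# Otherwise, check whether any disk is the bottleneck; a device
# showing ~100% busy in the iostat output is saturated.
iostat -xczmP 5
```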

Regards,
Jignesh


Steve wrote:
> Hello there;
>
> I've got an application that has to copy an existing database to a new
> database on the same machine.
>
> I used to do this with a pg_dump command piped to psql to perform the
> copy; however the database is 18 gigs large on disk and this takes a
> LONG time to do.
>
> So I read up, found some things in this list's archives, and learned
> that I can use createdb --template=old_database_name to do the copy in
> a much faster way since people are not accessing the database while
> this copy happens.
>
>
> The problem is, it's still too slow.  My question is, is there any way
> I can use 'cp' or something similar to copy the data, and THEN after
> that's done modify the database system files/system tables to
> recognize the copied database?
>
> For what it's worth, I've got fsync turned off, and I've read every
> tuning thing out there and my settings there are probably pretty
> good.  It's a Solaris 10 machine (V440, 2 processor, 4 Ultra320
> drives, 8 gig ram) and here's some stats:
>
> shared_buffers = 300000
> work_mem = 102400
> maintenance_work_mem = 1024000
>
> bgwriter_lru_maxpages=0
> bgwriter_lru_percent=0
>
> fsync = off
> wal_buffers = 128
> checkpoint_segments = 64
>
>
> Thank you!
>
>
> Steve Conley
>

Re: Copy database performance issue

From: "Jim C. Nasby"
Date:
On Mon, Oct 23, 2006 at 05:51:40PM -0400, Steve wrote:
> Hello there;
>
> I've got an application that has to copy an existing database to a new
> database on the same machine.
>
> I used to do this with a pg_dump command piped to psql to perform the
> copy; however the database is 18 gigs large on disk and this takes a LONG
> time to do.
>
> So I read up, found some things in this list's archives, and learned that
> I can use createdb --template=old_database_name to do the copy in a much
> faster way since people are not accessing the database while this copy
> happens.
>
>
> The problem is, it's still too slow.  My question is, is there any way I
> can use 'cp' or something similar to copy the data, and THEN after that's
> done modify the database system files/system tables to recognize the
> copied database?

AFAIK, that's what createdb --template already does... it copies the
database at the file level, essentially doing what cp does.

> For what it's worth, I've got fsync turned off, and I've read every tuning
> thing out there and my settings there are probably pretty good.  It's a
> Solaris 10 machine (V440, 2 processor, 4 Ultra320 drives, 8 gig ram) and
> here's some stats:

I don't think any of the postgresql.conf settings will really come into
play when you're doing this...
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)