Re: dump of 700 GB database - Mailing list pgsql-general

From karsten vennemann
Subject Re: dump of 700 GB database
Date
Msg-id E27976924EA445D3A387FEC2C72D7D8C@snuggie
In response to Re: dump of 700 GB database  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-general
> Note that cluster on a randomly ordered large table can be
> prohibitively slow, and it might be better to schedule a
> short downtime to do the following (pseudo code)
> alter table tablename rename to old_tablename; create table
> tablename like old_tablename; insert into tablename select *
> from old_tablename order by clustered_col1, clustered_col2;

That sounds like a great idea if it saves time.
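
For the record, a runnable version of that pseudo code might look like the
following (just a sketch - "tablename" and the clustered_col* columns are
placeholders from your example, and indexes/FKs are left to recreate
afterwards):

  BEGIN;
  ALTER TABLE tablename RENAME TO old_tablename;
  -- copy only the column definitions and defaults; indexes and
  -- constraints are faster to add after the load
  CREATE TABLE tablename (LIKE old_tablename INCLUDING DEFAULTS);
  -- rewrite the rows in clustered order
  INSERT INTO tablename
    SELECT * FROM old_tablename
    ORDER BY clustered_col1, clustered_col2;
  COMMIT;
  -- then recreate indexes and FK references, verify, and
  -- DROP TABLE old_tablename;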

>> (creating and moving over FK references as needed.)
>> shared_buffers=160MB, effective_cache_size=1GB,
>> maintenance_work_mem=500MB, wal_buffers=16MB,
>> checkpoint_segments=100

> What's work_mem set to?
work_mem = 32MB
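
For the reload itself I could also raise the sort memory just for that
session instead of globally, something like (example values only, not
tested on this box):

  SET work_mem = '256MB';            -- helps the big ORDER BY during the reload
  SET maintenance_work_mem = '1GB';  -- speeds up rebuilding the indexes afterwards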

> What ubuntu?  64 or 32 bit?
It's 32 bit. A 4 GB dump file does seem too small for a database that was
originally 350 GB, though, and I don't know why pg_restore fails...

> Have you got either a file
> system or a set of pg tools limited to 4Gig file size?
Not sure what the problem is on my server - I'm still trying to figure out
what makes pg_restore fail...

