"D'Arcy J.M. Cain" wrote:
>
> Thus spake Tom Lane
> > I'd suggest setting the limit a good deal less than 2Gb to avoid any
> > risk of arithmetic overflow. Maybe 200000 8K blocks, instead of 262144.
>
> Why not make it substantially lower by default? Makes it easier to split
> a database across spindles. Even better, how about putting extra extents
> into different directories like data/base.1, data/base.2, etc? Then as
> the database grows you can add drives, move the extents into them and
> mount the new drives. The software doesn't even notice the change.
It would also be a great way to help optimization if indexes were kept
in a separate directory from the tables.
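Something like this is what I picture the data directory looking like
under such a scheme (the names here are only my guesses at how it might
shake out, not an actual proposal):

    $ ls $PGDATA
    base/       # first segment of every relation
    base.1/     # second segments - mount a fresh drive here when base/ fills
    base.2/     # third segments, on yet another spindle
    index/      # index files on their own disk, away from the heap files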
And of course our current way of keeping all the large object files in
one
directory (even _the same_ with other data) sucks.
It has kept me away from using large objects at all, as I've heard that
Linux (or rather ext2fs) is not very good at dealing with huge
directories. And I have no use for only a few large objects ;)
There have been suggestions about splitting up the large object storage
by the hex representation of the oid value (= part of the current
filename), but a good start would be to just put them in a separate
directory under pg_data. The temp files are also good candidates for a
separate directory.
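For example, bucketing on the low byte of the oid would cap every
subdirectory at 1/256th of the objects (the xinv naming below is from
memory, and the lobj/ layout is just a sketch):

    # current: one flat directory, large objects mixed in with tables
    xinv0001a3f2  xinv0001a3f3  xinv0001b007  ... thousands more ...
    # proposed: split on the last two hex digits of the oid
    lobj/f2/xinv0001a3f2
    lobj/f3/xinv0001a3f3
    lobj/07/xinv0001b007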
The next step would of course be dataspaces, probably most easily
implemented as directories:
CREATE DATASPACE PG_DATA1 STORAGE='/mnt/scsi.105.7/data1';
SET DEFAULT_DATASPACE TO PG_DATA1;
CREATE TABLE ... IN DATASPACE PG_DATA;
CREATE INDEX ... ;
Then we wouldn't have to move and symlink tables and indexes manually.
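Today that dance looks something like this for every single relation
(postmaster stopped first, of course; the target path is just the one
from the example above):

    $ cd $PGDATA/base/mydb
    $ mv bigtable /mnt/scsi.105.7/data1/bigtable
    $ ln -s /mnt/scsi.105.7/data1/bigtable bigtable
    # ... and again for each index that should live on the other spindle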
--------------
Hannu