Re: [HACKERS] Problems with >2GB tables on Linux 2.0

From: Cary O'Brien
Subject: Re: [HACKERS] Problems with >2GB tables on Linux 2.0
Date:
Msg-id: 199902081315.IAA10470@saltmine.radix.net
List: pgsql-hackers
D'Arcy wrote:
> 
> Thus spake Tom Lane
> > I'd suggest setting the limit a good deal less than 2Gb to avoid any
> > risk of arithmetic overflow.  Maybe 200000 8K blocks, instead of 262144.
> 
> Why not make it substantially lower by default?  Makes it easier to split
> a database across spindles.  Even better, how about putting extra extents
> into different directories like data/base.1, data/base.2, etc?  Then as
> the database grows you can add drives, move the extents into them and
> mount the new drives.  The software doesn't even notice the change.
> 
> Just a thought.
> 

A good one.  It could be extended to large objects, too.  One of my reasons
for not using large objects is that they all end up in the same directory
(with all the other data files).  Things work much better if the number
of files in a directory is kept to a 3-digit value.  Plus, depending
on how the subdirectories are assigned, it makes it easier to split
across drives.  Hashing the oid to an 8-bit value might be a start.
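
To make the idea concrete, here is a minimal sketch (not anything in the
PostgreSQL tree; the lobj_bucket() helper and the data/lobj/ layout are
invented purely for illustration) of folding an oid down to 8 bits and using
the result as a subdirectory name, so large objects spread across at most
256 directories:

#include <stdio.h>

typedef unsigned int Oid;

/* Fold the oid down to 8 bits; objects spread over at most 256 buckets. */
static unsigned char
lobj_bucket(Oid oid)
{
    return (unsigned char) ((oid ^ (oid >> 8) ^ (oid >> 16) ^ (oid >> 24)) & 0xff);
}

int
main(void)
{
    Oid  oid = 1234567;
    char path[64];

    /* Hypothetical layout: data/lobj/<bucket-in-hex>/<oid> */
    snprintf(path, sizeof(path), "data/lobj/%02x/%u", lobj_bucket(oid), oid);
    printf("%s\n", path);   /* prints data/lobj/43/1234567 */
    return 0;
}

With 256 buckets, each directory stays under a 3-digit file count until you
have on the order of a couple hundred thousand large objects, and each bucket
directory could be a mount point or symlink onto a different spindle.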

With data tables and indexes it would still be nice to retain the
human-understandable names. 

-- cary

