Re: [HACKERS] Problems with >2GB tables on Linux 2.0 - Mailing list pgsql-hackers
| From | Peter T Mount |
|---|---|
| Subject | Re: [HACKERS] Problems with >2GB tables on Linux 2.0 |
| Date | |
| Msg-id | Pine.LNX.4.04.9902071440120.553-100000@maidast.retep.org.uk |
| In response to | Re: [HACKERS] Problems with >2GB tables on Linux 2.0 (Peter T Mount <peter@retep.org.uk>) |
| Responses | Re: [HACKERS] Problems with >2GB tables on Linux 2.0 |
| List | pgsql-hackers |
On Sun, 7 Feb 1999, Peter T Mount wrote:

> On Sat, 6 Feb 1999, Hannu Krosing wrote:
>
> > Thomas Reinke wrote:
> > >
> > > I may be dating myself really badly here, but isn't there a hard limit
> > > on the file system at 2Gig? I thought the file size attribute in Unix
> > > is represented as a 32 bit signed long, which happens to be a max value
> > > of 2147483648. If I'm right, it means the problem is fundamentally
> > > with the file system, not with PostGres, and you won't solve this
> > > unless the os supports larger files.
> >
> > There is logic inside PostgreSQL to overflow to the next file at 2GB,
> > but apparently this is currently broken.
> >
> > AFAIK, there are people working on it now
>
> Yes, me ;-)
>
> I have an idea where the failure is occurring, but I'm still testing the
> relevant parts of the code.

Well, just now I think I know what's going on.

First, I reduced the size at which Postgres breaks the file to 2MB (256
blocks). I then ran the test script that imports some large records into a
test table. As expected, the splitting of the file works fine, so that code
isn't broken.

What I think is happening is that the code extends the table, then tests to
see if it's at the 2Gig limit, and when it is, creates the next file for
that table. However, I think the OS has problems with a file of exactly 2GB
in size.

I've attached a patch that should reduce the maximum table size by one
block. This should prevent us from hitting the physical limit.

Note: I haven't tested this patch yet! It compiles but, because the test
takes 4 hours for my machine to reach 2GB, and I have a few other things to
do today, I'll run it overnight. Hopefully, first thing tomorrow, we'll know
if it works.

Peter

--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf

*** ./backend/storage/smgr/md.c.orig	Mon Feb  1 17:55:57 1999
--- ./backend/storage/smgr/md.c	Sun Feb  7 14:48:35 1999
***************
*** 77,86 ****
  *
  * 19 Mar 98 darrenk
  *
  */

  #ifndef LET_OS_MANAGE_FILESIZE
! #define RELSEG_SIZE ((8388608 / BLCKSZ) * 256)
  #endif

  /* routines declared here */
--- 77,91 ----
  *
  * 19 Mar 98 darrenk
  *
+ * After testing, we need to add one less block to the file, otherwise
+ * we extend beyond the 2-gig limit.
+ *
+ * 07 Feb 99 Peter Mount
+ *
  */

  #ifndef LET_OS_MANAGE_FILESIZE
! #define RELSEG_SIZE (((8388608 / BLCKSZ) * 256)-BLCKSZ)
  #endif

  /* routines declared here */
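For anyone following along, here is a minimal sketch (plain C, not
PostgreSQL source) of the segment arithmetic being discussed. The helper
names segment_for_block() and offset_in_segment() are made up for
illustration; md.c does the equivalent division and modulo internally when
it decides which segment file a block lives in. It assumes the default
BLCKSZ of 8192.

    /*
     * Illustrative sketch only: how RELSEG_SIZE maps a block number onto
     * the segment files (relname, relname.1, relname.2, ...).
     * segment_for_block() and offset_in_segment() are hypothetical names.
     */
    #include <stdio.h>

    #define BLCKSZ 8192                             /* default block size */
    #define RELSEG_SIZE ((8388608 / BLCKSZ) * 256)  /* blocks per segment (old value) */

    /* Which segment file holds this block? */
    static long segment_for_block(long blkno)
    {
        return blkno / RELSEG_SIZE;
    }

    /* Byte offset of the block within its segment file. */
    static long offset_in_segment(long blkno)
    {
        return (blkno % RELSEG_SIZE) * (long) BLCKSZ;
    }

    int main(void)
    {
        /* 262144 blocks * 8192 bytes = 2147483648 bytes = exactly 2^31,
         * i.e. one byte past what a signed 32-bit off_t can represent. */
        printf("blocks per segment: %d\n", RELSEG_SIZE);
        printf("bytes per full segment: %lld\n",
               (long long) RELSEG_SIZE * BLCKSZ);

        /* The last block of segment 0 starts just under the 2GB mark. */
        long last = RELSEG_SIZE - 1;
        printf("block %ld -> segment %ld, offset %ld\n",
               last, segment_for_block(last), offset_in_segment(last));
        return 0;
    }

With BLCKSZ = 8192 the old RELSEG_SIZE works out to 262144 blocks, which is
exactly 2^31 bytes per segment, so a segment that reaches its full size
lands precisely on the signed 32-bit boundary; that is the arithmetic
behind trimming the segment size back before the limit.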