> I've been working with some really large tables in Postgres
> (both 6.2.1 and, in preparing to upgrade, 6.3.2) running on
> Solaris 2.6 on an Ultra1. When the amount of data in any one
> table reaches 2Gig, the postgres client connection will just hang.
> Any subsequent connections will hang until the first offending
> process is killed, at which point other connections will complete
> successfully. Other queries will work unless one tries to access
> the data that was added at the end of the table, at which point
> it will hang until it is killed again.
>
> I have been able to work around the problem by compiling 6.3.2
> with the Solaris options for using 64bit files. To make the change,
> I added the flags to the sparc_solaris-gcc template file, and changed
> the couple of occurrences of "fseek" in src/backend/utils/sort/psort.c
> to "fseeko", just in case. It compiles, all of the regression tests
> do the expected things, and the "table.1" file gets created when the
> table exceeds the 2Gig mark. Everything appears to be functioning
> fine, and I was wondering if anyone else has had any experiences with
> similar situations.
>
Some people have complained that the over-2-gig code is broken. It seems
to be partially broken. Shouldn't you be able to run fine without
the 64-bit files option? Can you submit a patch that works without
the Solaris hack? The code was supposed to remove the need for 64-bit
files. If you have 64-bit files, there is no reason to start a
"table.1".
Either we need to get the "table.1" segmentation working without 64-bit
files, or we delete all the "table.1" stuff and let the OS handle large
files.
--
Bruce Momjian | 830 Blythe Avenue
maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)