Re: [HACKERS] sort on huge table - Mailing list pgsql-hackers

From Tatsuo Ishii
Subject Re: [HACKERS] sort on huge table
Date
Msg-id 199910180608.PAA01008@srapc451.sra.co.jp
In response to Re: [HACKERS] sort on huge table  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [HACKERS] sort on huge table  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
>Tatsuo Ishii <t-ishii@sra.co.jp> writes:
>> I have done the 2GB test on current (with your fixes). This time the
>> sorting query worked great! I saw lots of temp files, but the total
>> disk usage was almost same as before (~10GB). So I assume this is ok.
>
>I have now committed another round of changes that reduce the temp file
>size to roughly the volume of data to be sorted.  It also reduces the
>number of temp files --- there will be only one per GB of sort data.
>If you could try sorting a table larger than 4GB with this code, I'd be
>much obliged.  (It *should* work, of course, but I just want to be sure
>there are no places that will have integer overflows when the logical
>file size exceeds 4GB.)  I'd also be interested in how the speed
>compares to the old code on a large table.
>
>Still need to look at the memory-consumption issue ... and CREATE INDEX
>hasn't been taught about any of these fixes yet.

I tested with a 1GB+ table (which has one segment file) and a 4GB+
table (which has four segment files), and got the same error message
in both cases:

ERROR:  ltsWriteBlock: failed to write block 131072 of temporary file
	Perhaps out of disk space?

Of course there is enough disk space, and no physical errors were
reported. It seems the error is raised when the temp file hits 1GB?
--
Tatsuo Ishii


