On Sun, 7 Feb 1999 gjerde@icebox.org wrote:
> On Sun, 7 Feb 1999, Peter T Mount wrote:
> > Anyhow, I'm about to start the test, using RELSEG_SIZE set to 243968 which
> > works out to be 1.6Gb. That should stay well away from the overflow
> > problem.
>
> Hi,
> I just did a checkout of the cvs code, hardcoded RELSEG_SIZE to 243968,
> and it works beautifully now!
The problem here is that RELSEG_SIZE depends on the block size. Since we
can increase the block size from 8k, hardcoding it like that would break.
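Just to illustrate, something along these lines would keep a bigger block
size from pushing a segment past the byte limit (a rough sketch only:
RELSEG_BYTES is a made-up name, and the 0x60000000 budget is just a
stand-in for the ~1.6Gb figure in this thread, not a settled value):

    /* Sketch: derive the per-segment block count from BLCKSZ, so a larger
     * block size shrinks the block count instead of growing the file. */
    #define RELSEG_BYTES  0x60000000              /* ~1.6Gb budget (placeholder) */
    #define RELSEG_SIZE   (RELSEG_BYTES / BLCKSZ) /* blocks per segment file     */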
As I type, my machine is populating the test table.
> I imported about 2.2GB of data(table file size) and it looks like this:
> -rw------- 1 postgres postgres 1998585856 Feb 7 16:22 mcrl3_1
> -rw------- 1 postgres postgres 219611136 Feb 7 16:49 mcrl3_1.1
> -rw------- 1 postgres postgres 399368192 Feb 7 16:49 mcrl3_1_partnumber_index
>
> And it works fine. I did some selects on data that should have ended up
> in the .1 file, and it works great. The best thing about it is that it
> seems at least as fast as MSSQL on the same data, if not faster.
This is what I got when I tested it with a reduced file size, and it's what
made me decide to reduce the segment size by one block in the patch I posted
earlier. However, I'm now following John's suggestion of reducing the file
size a lot more, to make sure we don't hit any arithmetic errors, etc. So
the max file size is about 1.6Gb.
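For reference, here's a quick throwaway check of where the overflow
actually sits (my own sketch, not part of the patch): with 8k blocks, a
full 2Gb segment needs a byte offset of 2^31, one past the largest signed
32-bit value, which is why the limit gets pulled well back from that
boundary.

    #include <stdio.h>

    int main(void)
    {
        long long blcksz = 8192;            /* default block size          */
        long long full   = 262144 * blcksz; /* 2Gb segment = 2147483648    */
        long long max32  = 2147483647;      /* largest signed 32-bit value */

        printf("2Gb segment offset %lld overflows a signed 32-bit int by %lld\n",
               full, full - max32);
        return 0;
    }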
> It did take like 45 minutes to create that index. Isn't that a bit
> long (AMD K6-2 350MHz)? :)
Well, it's taking my poor old P133 about 2 hours to hit 2Gb at the moment.
> Suggestion: How hard would it be to make copy tablename FROM 'somefile'
> give some feedback? Either some kind of percentage, or just print
> something after every 10k rows or so.
Attached is the test script I'm using, minus the data file.
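On the feedback suggestion above, something as simple as this would do it
(a standalone sketch using plain stdio rather than the real backend COPY
code, so the names here are made up for illustration):

    #include <stdio.h>

    int main(void)
    {
        char line[8192];
        long rows = 0;

        /* Read rows from stdin, the way COPY ... FROM reads its input,
         * and report progress every 10,000 rows. */
        while (fgets(line, sizeof(line), stdin) != NULL)
        {
            rows++;
            if (rows % 10000 == 0)
                fprintf(stderr, "copy: %ld rows read\n", rows);
        }
        fprintf(stderr, "copy: done, %ld rows total\n", rows);
        return 0;
    }

In the backend it would presumably just be an elog(NOTICE, ...) at the same
point in the input loop.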
Peter
--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf