Re: [HACKERS] Re: [QUESTIONS] Business cases
From: darrenk@insightdist.com (Darren King)
> > >   Also, how are people handling tables with lots of rows?  The 8k tuple
> > > size can waste a lot of space.  I need to be able to handle a 2 million
> > > row table, which will eat up 16GB, plus more for indexes.
> >
> >     This one is improved upon in v6.3, where at compile time you can stipulate
> > the tuple size.  We are looking into making this an 'initdb' option instead,
> > so that you can have the same binary for multiple "servers", but any database
> > created under a particular server will be constrained by that tuple size.
>
>   That might help a bit, but some tables may have big rows and some not.
> For example, my 2 million row table only requires two date
> fields and 7 integer fields.  That isn't very much data.  However, I'd
> like to be able to join against another table with much larger rows.

Two dates and 7 integers would make a tuple of 90-some bytes, call it 100 max.
So you would probably get about 80 tuples per 8k page, and 25000 pages would
use a file of about 200 meg.
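
Here is that back-of-envelope math as a small C sketch; the 100-byte tuple
size is just the rough estimate above (36 bytes of data plus header and
alignment overhead), not a measured figure:

    #include <stdio.h>

    int main(void)
    {
        long rows = 2000000;           /* the 2 million row table */
        int  data = 2 * 4 + 7 * 4;     /* two dates + seven ints = 36 bytes */
        int  tuple = data + 64;        /* header/alignment overhead; call it 100 */
        int  per_page = 8192 / tuple;  /* ~80 tuples per 8k page */
        long pages = (rows + per_page - 1) / per_page;

        printf("%d per page, %ld pages, ~%ld MB\n",
               per_page, pages, pages * 8192L / (1024 * 1024));
        return 0;
    }

That prints 81 tuples per page, about 24700 pages and ~192 MB, which rounds
to the 25000 pages / 200 meg above.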

The block size parameter will be database-specific, not table-specific, and
since you can't join tables from different _databases_, the second issue is moot.
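
For context, in the current tree the page size is a single compile-time
constant, so one binary means one block size everywhere; the initdb idea
above would move that choice to the time the server's data area is created.
A minimal sketch of the compile-time version (the exact header location may
differ between versions):

    /* include/config.h -- block size used for heap and index files.
     * Fixed at build time today; the proposal above would instead record
     * a per-database value chosen when initdb is run. */
    #define BLCKSZ 8192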

If I could get around to the tablespace concept again, then maybe a different
block size per tablespace would be useful.  But, that is putting the cart
a couple of light-years ahead of the proverbial horse...

Darren  aka  darrenk@insightdist.com