Re: Trading off large objects (arrays, large strings, large tables) for timeseries - Mailing list pgsql-general

From Antonios Christofides
Subject Re: Trading off large objects (arrays, large strings, large tables) for timeseries
Date
Msg-id 20050216092449.GA3165@itia.ntua.gr
Whole thread Raw
In response to Re: Trading off large objects (arrays, large strings, large tables) for timeseries  (Shridhar Daithankar <ghodechhap@ghodechhap.net>)
List pgsql-general
Shridhar Daithankar wrote:
> Perhaps you could attempt to store a fixed, small number of records per row,
> say 4-6? Or maybe a smaller fixed-size array? That should make the row
> overhead less intrusive...

Thanks; I didn't adopt your idea, but it helped me come up with another
one:

    CREATE TABLE timeseries (
        timeseries_id integer,
        top           text,
        middle        text,
        bottom        text
    );

The entire timeseries is the concatenation of 'top' (a few records),
'middle' (millions of records), and 'bottom' (a few records). To get
the last record, or to append a record, you only read/write 'bottom',
which is very fast. Whenever the entire timeseries is written (a less
frequent operation), the division into these three parts will be
redone, thus keeping 'bottom' small.
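The scheme above can be sketched in memory (an illustrative sketch only; the
class and method names are my own, not from the thread, and the real version
would read/write the three text columns with SQL):

```python
class ThreeChunkSeries:
    """A timeseries stored as three text chunks: top + middle + bottom.

    Appending and reading the last record touch only the small 'bottom'
    chunk; a full rewrite redistributes records so 'bottom' shrinks again.
    """

    def __init__(self, records, head=4, tail=4):
        # A full write performs the three-way split.
        self.rewrite(records, head, tail)

    def rewrite(self, records, head=4, tail=4):
        # Redistribute: a few records in 'top', a few in 'bottom',
        # everything else in 'middle'.  The max() guard keeps the
        # slices from overlapping on short series.
        cut = max(head, len(records) - tail)
        self.top = "".join(records[:head])
        self.middle = "".join(records[head:cut])
        self.bottom = "".join(records[cut:])

    def append(self, record):
        # Cheap operation: only the small 'bottom' chunk is rewritten.
        self.bottom += record

    def last_record(self):
        # Cheap operation: only 'bottom' is parsed.
        return self.bottom.splitlines(keepends=True)[-1]

    def full_text(self):
        # The whole timeseries is the concatenation of the three chunks.
        return self.top + self.middle + self.bottom
```

In a database-backed version, `append` would be a single `UPDATE ... SET
bottom = bottom || $1` on one row, which avoids reading or rewriting the
multi-megabyte 'middle' column.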
