Re: Trading off large objects (arrays, large strings, large tables) for timeseries - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Trading off large objects (arrays, large strings, large tables) for timeseries
Date:
Msg-id: 12079.1108479382@sss.pgh.pa.us
In response to: Trading off large objects (arrays, large strings, large tables) for timeseries (Antonios Christofides <anthony@itia.ntua.gr>)
Responses: Re: Trading off large objects (arrays, large strings, large tables) for timeseries
List: pgsql-general
Antonios Christofides <anthony@itia.ntua.gr> writes:
>     Why 25 seconds for appending an element?

Would you give us a specific test case, rather than a vague description
of what you're doing?

> (2) I also tried using a large (80M) text instead (i.e. instead of
>     storing an array of lines, I store a huge plain text file). What
>     surprised me is that I can get the 'tail' of the file (using
>     substring) in only around one second, although it is transparently
>     compressed (to 17M). It doesn't decompress the entire string, does
>     it? Does it store it somehow chunked?

http://www.postgresql.org/docs/8.0/static/storage-toast.html
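In short: yes, TOAST stores large values chunked in a side table, so `substring()` can fetch only the chunks covering the requested slice — but chunk-wise slicing only skips work when the value is stored uncompressed. A sketch of how one might exploit that (table and column names here are illustrative, not from the original thread):

```sql
-- Illustrative table holding one large text value per row.
CREATE TABLE series_text (id integer PRIMARY KEY, body text);

-- Disable TOAST compression for this column so substring() can read
-- just the out-of-line chunks covering the slice, instead of having
-- to decompress the whole value first.
ALTER TABLE series_text ALTER COLUMN body SET STORAGE EXTERNAL;

-- Fetching the tail of an ~80 MB value then touches only a few chunks:
SELECT substring(body FROM 80000000) FROM series_text WHERE id = 1;
```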

> What I'm trying to do is find a good way to store timeseries. A
> timeseries is essentially a series of (date, value) pairs, and more
> specifically it is an array of records, each record consisting of
> three items: date TIMESTAMP, value DOUBLE PRECISION, flags TEXT.

In practically every case, the answer is to use a table with rows
of that form.  SQL just isn't designed to make it easy to do something
else.

            regards, tom lane
