Re: Trading off large objects (arrays, large strings, large tables) for timeseries - Mailing list pgsql-general

From Antonios Christofides
Subject Re: Trading off large objects (arrays, large strings, large tables) for timeseries
Date 2005-02-16 09:04
Msg-id 20050216090400.GA3131@itia.ntua.gr
In response to Re: Trading off large objects (arrays, large strings, large tables) for timeseries  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Tom Lane wrote:
> Antonios Christofides <anthony@itia.ntua.gr> writes:
> >     Why 25 seconds for appending an element?
>
> Would you give us a specific test case, rather than a vague description
> of what you're doing?

OK, sorry, here it is (run on another machine, so the times differ:
PostgreSQL 8.0.1 on a Pentium IV 1.6 GHz, 512 MB RAM, Debian woody,
kernel 2.4.18):

CREATE TABLE test(id integer not null primary key, records text[]);

INSERT INTO test(id, records) VALUES (1,
'{"1993-09-30 13:20,182,",
"1993-09-30 13:30,208,",
"1993-09-30 13:51,203,",
[snipping around 2 million rows]
"2057-02-13 02:31,155,",
"2099-12-08 10:39,198,"}');

[Took 60 seconds]

SELECT array_dims(records) FROM test;
 array_dims
-------------
 [1:2000006]
(1 row)

UPDATE test SET records[2000007] = 'hello, world!';

[11 seconds]

UPDATE test SET records[1000000] = 'hello, world!';

[15 seconds (though the difference may just be system load - I don't
have a completely idle machine available right now)]

I expected both UPDATE commands above to be practically instantaneous.
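
For comparison, here is the row-per-record layout I'm trading this
off against - only a sketch, with the table and column names made up
for illustration - where appending or changing a record touches a
single row instead of the whole array:

CREATE TABLE test2 (
    id integer NOT NULL,       -- timeseries id
    stamp timestamp NOT NULL,  -- record timestamp
    value text,                -- rest of the record
    PRIMARY KEY (id, stamp)
);

-- Appending a record is a single-row INSERT:
INSERT INTO test2 (id, stamp, value)
    VALUES (1, '2099-12-08 10:40', '199,');

-- Changing a record in the middle is a single-row UPDATE:
UPDATE test2 SET value = 'hello, world!'
    WHERE id = 1 AND stamp = '1993-09-30 13:30';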
