Re: select count() out of memory - Mailing list pgsql-general

From tfinneid@student.matnat.uio.no
Subject Re: select count() out of memory
Date
Msg-id 42923.134.32.140.234.1193315284.squirrel@webmail.uio.no
Whole thread Raw
In response to Re: select count() out of memory  (Alvaro Herrera <alvherre@commandprompt.com>)
Responses Re: select count() out of memory  (Alvaro Herrera <alvherre@commandprompt.com>)
Re: select count() out of memory  (Alvaro Herrera <alvherre@commandprompt.com>)
List pgsql-general
> tfinneid@student.matnat.uio.no wrote:
>
>> > are a dump of Postgres's current memory allocations and could be
>> useful in
>> > showing if there's a memory leak causing this.
>>
>> The file is 20M; these are the last lines (the first line continues
>> until ff_26000):
>>
>>
>> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
>> chunks); 632 used
>
> You have 26000 partitions???

At the moment the db has 55000 partitions, and that's only a fifth of the
complete volume the system will have in production. The reason I chose
this design is that a new partition is loaded with data every 3-30
seconds, and each batch is then read by up to 15 readers as soon as it
becomes available. The total data volume will be approximately 2-4 TB in
production, so it would be too slow to put everything in a single table
with permanent indexes.
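The pattern described above can be sketched with inheritance-based partitioning (the mechanism available in PostgreSQL 8.x, which this thread predates declarative partitioning): one child table per load, bulk-loaded first and indexed only afterwards, so inserts never contend with a large permanent index. Table and column names below are illustrative guesses, not taken from the actual schema.

```sql
-- Hypothetical parent table; real column names are unknown.
CREATE TABLE attributes (
    g1_seq  integer,
    ff      integer,
    value7  numeric
);

-- One new child table (partition) per load interval:
CREATE TABLE attributes_g1_seq_1 () INHERITS (attributes);

-- Bulk-load the batch, then build the index once, after the load:
COPY attributes_g1_seq_1 FROM '/path/to/batch.dat';
CREATE INDEX idx_attributes_g1_seq_1_ff_4_value7
    ON attributes_g1_seq_1 (ff, value7);
```

The index name mirrors the memory-context name quoted earlier in the thread; whether the real indexes are built this way is an assumption.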

I did a test earlier in which I created 1 million partitions (without
data) to check the limits of pg, so I think it should be ok.
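For a test like that, one way to verify how many child partitions a parent table actually has is to query the `pg_inherits` system catalog; `attributes` here is a hypothetical parent table name, not one confirmed by the thread.

```sql
-- Count the child tables inheriting from a given parent:
SELECT count(*)
FROM pg_inherits
WHERE inhparent = 'attributes'::regclass;
```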

thomas

