tfinneid@student.matnat.uio.no wrote:
> > tfinneid@student.matnat.uio.no wrote:
> >
> >> > are a dump of Postgres's current memory allocations and could be
> >> useful in
> >> > showing if there's a memory leak causing this.
> >>
> >> The file is 20M, these are the last lines: (the first line continues
> >> until ff_26000)
> >>
> >>
> >> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
> >> chunks); 632 used
> >
> > You have 26000 partitions???
>
> At the moment the db has 55000 partitions, and that's only a fifth of the
> complete volume the system will have in production. The reason I chose
> this solution is that a partition will be loaded with new data every 3-30
> seconds, and all that will be read by up to 15 readers every time new data
> is available. The data will be approx 2-4TB in production in total. So it
> will be too slow if I put it in a single table with permanent indexes.
>
> I did a test previously, where I created 1 million partitions (without
> data) and I checked the limits of pg, so I think it should be ok.
Clearly it's not. The difference could be the memory usage and wastage
for all those relcache entries and other per-relation overhead. I would
reduce the number of partitions to a more reasonable value (within the
tens, most likely).
Maybe your particular problem can be solved by raising
max_locks_per_transaction, but I wouldn't count on it.
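For context: the shared lock table has room for roughly
max_locks_per_transaction * (max_connections + max_prepared_transactions)
locks, so a single query touching tens of thousands of partitions can
exhaust it quickly. If you do want to experiment with raising it, a
sketch of the change might look like this (the value shown is purely
illustrative, not a recommendation):

```
# postgresql.conf -- illustrative value only
max_locks_per_transaction = 256    # default is 64; changing it requires a
                                   # server restart, and it increases shared
                                   # memory usage for the lock table
```

Again, this only buys headroom in the lock table; it does nothing about
the relcache memory consumed by that many partitions.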
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.