Hi Stephen,
Each query is running in a separate transaction.
Why is partitioning a better approach here than using partial indexes?
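Just to illustrate what I am comparing (the table and index names are only an example), I had in mind one partial index per value rather than separate partitions, roughly:

    -- hypothetical: one small partial index per value of num, so a query like
    --   select * from test where num=2 and c2='abc'
    -- can use an index that covers only the num=2 rows
    CREATE INDEX test_num_2_idx ON test (c2) WHERE num = 2;
    CREATE INDEX test_num_3_idx ON test (c2) WHERE num = 3;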
Thanks,
Lior
-----Original Message-----
From: Stephen Frost [mailto:sfrost@snowman.net]
Sent: Monday, May 27, 2013 16:15
To: Ben Zeev, Lior
Cc: Atri Sharma; Pg Hackers
Subject: Re: [HACKERS] PostgreSQL Process memory architecture
Lior,
* Ben Zeev, Lior (lior.ben-zeev@hp.com) wrote:
> Yes, the memory utilization per PostgreSQL backend process grows when
> running queries against these tables, for example: select * from test where num=2 and c2='abc'
> When it starts it doesn't consume too much memory, but as it executes
> against more and more indexes the memory consumption grows
Are these all running in one transaction, or is this usage growth across multiple transactions? If this is all in the
same transaction, what happens when you run these queries in independent transactions?
> These tables should contain data, but I truncated the data of the
> tables because I wanted to make sure that the memory consumption is
> not related to the data inside the table, but rather to the structure
> of the tables
If you actually have sufficient data to make having 500 indexes on a table sensible, it strikes me that this memory
utilization may not be the biggest issue you run into. If you're looking for partitioning, that's much better done, in
PG at least, by using inheritance and constraint exclusion.
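Roughly, that pattern looks like this (table and constraint names are just an example, not your schema):

    CREATE TABLE test (num int, c2 text);

    -- one child table per value/range of num, each carrying a CHECK constraint
    CREATE TABLE test_num_1 (CHECK (num = 1)) INHERITS (test);
    CREATE TABLE test_num_2 (CHECK (num = 2)) INHERITS (test);

    -- with constraint_exclusion = partition (the default), a query such as
    --   SELECT * FROM test WHERE num = 2 AND c2 = 'abc'
    -- only has to scan test_num_2, not the other children
    SET constraint_exclusion = partition;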
Thanks,
Stephen