Thread: query memory consumption
Hey folks,

We are looking to optimize the query I was talking about last week,
which is killing our system.

We have EXPLAIN and EXPLAIN ANALYZE, which tell us about the time cost
of a query, but what does one use to determine (and trace / predict?)
memory consumption?

thanks,
-Alan

--
“Don't eat anything you've ever seen advertised on TV”
   - Michael Pollan, author of "In Defense of Food"
On Mon, Sep 21, 2009 at 10:47 AM, Alan McKay <alan.mckay@gmail.com> wrote:
> We are looking to optimize the query I was talking about last week
> which is killing our system.
>
> We have explain and analyze which tell us about the cost of a query
> time-wise, but what does one use to determine (and trace / predict?)
> memory consumption?

I'm not sure what to suggest other than the available operating system
tools, but if you post EXPLAIN ANALYZE output we might be able to
speculate better.  Setting work_mem too high is a frequent cause of
problems of this sort, I think.

...Robert
On Mon, 21 Sep 2009, Alan McKay wrote:
> We have explain and analyze which tell us about the cost of a query
> time-wise, but what does one use to determine (and trace / predict?)
> memory consumption?

In Postgres, memory consumption for each operation is generally capped
at the value of work_mem.  However, a given query can consist of more
than one operation.  Generally, only heavy operations like sorts and
hashes consume work_mem, so it should be possible to look at the
EXPLAIN output, count those operations, multiply by work_mem, and get
the maximum amount of RAM the query can use.

However, sometimes a query's working data will not fit in work_mem.
At that point, Postgres writes the data to temporary files on disc,
and it is harder to predict how big those will be.  EXPLAIN ANALYSE
will sometimes tell you how big a sort was, for example.

Matthew

--
Reality is that which, when you stop believing in it, doesn't go away.
   -- Philip K. Dick
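[As a sketch of what Matthew describes — the table and column names here
are hypothetical, not from the original thread — EXPLAIN ANALYZE on a
sort-heavy query reports whether the sort fit in work_mem or spilled to
disk:]

```sql
-- Hypothetical names: big_table, some_col.
SET work_mem = '4MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;

-- In the output, look for a line such as:
--   Sort Method:  external merge  Disk: 102400kB   -- spilled past work_mem
-- versus, after raising work_mem high enough:
--   Sort Method:  quicksort  Memory: 51200kB       -- fit entirely in RAM
```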
On Mon, Sep 21, 2009 at 4:08 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> Setting work_mem too high is a frequent cause of problems of this
> sort, I think.

Too high?  How high is too high?

--
“Don't eat anything you've ever seen advertised on TV”
   - Michael Pollan, author of "In Defense of Food"
On Tue, Sep 22, 2009 at 1:36 PM, Alan McKay <alan.mckay@gmail.com> wrote:
> Too high?  How high is too high?

In a very simple scenario, you have 100 connections open, and all of
them run the query that was the reason you bumped work_mem to 256MB.
All of a sudden PostgreSQL starts to complain about a lack of RAM,
because you told it it could use up to work_mem * number_of_connections.

The best practice to avoid that is to bump work_mem temporarily before
the query and then lower it again; that lowers the chance of memory
exhaustion.

--
GJ
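[A rough way to do that arithmetic yourself, reading both settings out
of the pg_settings view (work_mem's setting value is reported in kB):]

```sql
-- Worst case if every connection ran one work_mem-sized sort at once.
-- Note this is a floor, not a ceiling: a single query can use several
-- work_mem allowances, one per sort or hash.
SELECT (SELECT setting::bigint FROM pg_settings WHERE name = 'max_connections')
     * (SELECT setting::bigint FROM pg_settings WHERE name = 'work_mem')
     / 1024 AS worst_case_mb;
```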
> The best practice to avoid that is to bump work_mem temporarily
> before the query and then lower it again; that lowers the chance of
> memory exhaustion.

Interesting - I can do that dynamically?

--
“Don't eat anything you've ever seen advertised on TV”
   - Michael Pollan, author of "In Defense of Food"
On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay <alan.mckay@gmail.com> wrote:
>> The best practice to avoid that is to bump work_mem temporarily
>> before the query and then lower it again; that lowers the chance of
>> memory exhaustion.
>
> Interesting - I can do that dynamically?

You can do:

  SET work_mem = '128MB'; SELECT 1; SET work_mem = '64MB';

etc., all in one batch.

--
GJ
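[A safer variant of the bump-and-lower pattern, for what it's worth
(the query here is hypothetical): SET LOCAL inside a transaction, so
the setting reverts automatically at COMMIT or ROLLBACK even if the
query errors out and the second SET is never reached.]

```sql
BEGIN;
SET LOCAL work_mem = '256MB';                -- applies only to this transaction
SELECT * FROM big_table ORDER BY some_col;   -- hypothetical memory-hungry query
COMMIT;                                      -- work_mem reverts automatically
```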
2009/9/22 Grzegorz Jaśkiewicz <gryzman@gmail.com>:
> On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay <alan.mckay@gmail.com> wrote:
>>> The best practice to avoid that is to bump work_mem temporarily
>>> before the query and then lower it again; that lowers the chance of
>>> memory exhaustion.
>>
>> Interesting - I can do that dynamically?
>
> You can do:
>
>   SET work_mem = '128MB'; SELECT 1; SET work_mem = '64MB';
>
> etc., all in one batch.

But if all backends are running this one query at the same time, it
won't help, because they will all bump up their limits at the same
time.  If they are all running different queries, and just one of them
really benefits from the extra memory while the rest would gain only a
little from it, then bumping work_mem just for the query that gets the
big improvement could work.

Jeff
2009/9/25 Jeff Janes <jeff.janes@gmail.com>:
> But if all backends are running this one query at the same time, it
> won't help, because they will all bump up their limits at the same
> time.  If they are all running different queries, and just one of
> them really benefits from the extra memory while the rest would gain
> only a little from it, then bumping work_mem just for the query that
> gets the big improvement could work.

This is, I think, a possible area for future optimizer work, but the
right design is far from clear.

...Robert