Thread: ERROR: out of memory DETAIL: Failed on request of size ???
We'd like to seek out your expertise on PostgreSQL regarding this error, which we're getting in an analytical database.
Some specs:
proc: dual Intel Xeon X5650 @ 2.67GHz, 6 cores each, hyperthreading on
memory: 48GB
OS: Oracle Enterprise Linux 6.3
create table test as
select col1, max(col2), max(col3), max(col4), max(col5)

The statement selects from a view, and that view queries another view that does a lot of UNION ALLs of tables. In this particular case I'm dealing with about 60 tables unioned together.
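(For context, the overall shape is roughly the sketch below. The view, table, and column names are hypothetical, since the thread never shows the real view definition, and only three of the roughly 60 unioned tables are written out.)

    -- Hypothetical reconstruction of the structure described above:
    -- a view that UNION ALLs many similar tables (about 60 in the real case).
    create view big_view as
    select col1, col2, col3, col4, col5 from part_table_01
    union all
    select col1, col2, col3, col4, col5 from part_table_02
    union all
    select col1, col2, col3, col4, col5 from part_table_03;
    -- ... and so on for the remaining tables

    -- The failing statement aggregates over that view:
    create table test as
    select col1, max(col2), max(col3), max(col4), max(col5)
    from big_view
    group by col1;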
The resulting error is:
ERROR: out of memory
DETAIL: Failed on request of size ???
Brian
On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong <bwong64@hotmail.com> wrote:
After doing a lot of googling, I've tried setting FETCH_COUNT in psql and/or setting work_mem (a sketch of the FETCH_COUNT approach follows the specs below). I'm just not able to work around this issue unless I take out all of the MAX() functions but one.

shared_buffers: 18GB
postgresql version: 9.1.9

We'd like to seek out your expertise on PostgreSQL regarding this error that we're getting in an analytical database.

Some specs:
proc: dual Intel Xeon X5650 @ 2.67GHz, 6 cores each, hyperthreading on
memory: 48GB
OS: Oracle Enterprise Linux 6.3
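(As referenced above, the FETCH_COUNT attempt would look roughly like this in psql; the batch size and view name are made up, since the thread doesn't show them.)

    -- Have psql fetch results in batches via a cursor instead of buffering the
    -- whole result set in client memory (this does not reduce server-side memory use):
    \set FETCH_COUNT 10000
    select col1, max(col2), max(col3), max(col4), max(col5)
    from big_view
    group by col1;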
What is your work_mem set to?
Did testing show that shared_buffers set to 18GB was effective? That seems about 2 to 3 times beyond what you probably want.
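(Not from the original thread, but a quick way to check the settings being discussed, using the standard pg_settings view:)

    -- Current memory-related settings and their units:
    select name, setting, unit
    from pg_settings
    where name in ('work_mem', 'shared_buffers', 'maintenance_work_mem');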
On Mon, Nov 18, 2013 at 8:30 PM, Brian Wong <bwong64@hotmail.com> wrote:
I've tried work_mem values from 1GB all the way up to 40GB, with no effect on the error. I'd like to think of this as a server process memory issue (not the server's buffers) or a client process memory issue, primarily because when we reproduced the error there was no other load whatsoever. Unfortunately, the error doesn't say what kind of memory ran out.
I wasn't asking because I thought you should make it higher; I think you should make it lower. Set it to 200MB, reload your conf files ("select pg_reload_conf()"), and try your queries again. work_mem is a per-step setting, used by sort and aggregate steps, potentially multiple times in a single query, and multiplied again by any other concurrent queries. In this case it might not be the cause, but certainly try a lower setting to rule it out.
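(A sketch of that suggestion; on 9.1 the value has to be changed in postgresql.conf by hand, and the session-level SET shown last is just a way to test without touching the config file.)

    -- In postgresql.conf (path varies by install):
    --   work_mem = 200MB
    -- Then reload the configuration without restarting the server:
    select pg_reload_conf();

    -- Or test the lower value for the current session only:
    set work_mem = '200MB';
    show work_mem;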
Hello
I reported a similar problem a week ago: Postgres only releases the work_mem assigned to each SELECT in a union after the whole query finishes, so large SELECT UNION ALL SELECT UNION ALL ... queries require a lot of memory. My customer reported significant problems with 100 unions. He had to migrate to 64-bit Postgres with a large swap, or (his first, but probably not good, solution) significantly decrease work_mem. Probably the better solution at this point is to use a temporary table.
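(A rough sketch of the temporary-table workaround suggested above; the table and column names reuse the hypothetical ones from earlier, and the real case would repeat the INSERT for all ~60 source tables.)

    -- Materialize the unioned data in smaller batches instead of one giant UNION ALL,
    -- so each statement releases its memory before the next one runs:
    create temporary table staging as
    select col1, col2, col3, col4, col5 from part_table_01;

    insert into staging
    select col1, col2, col3, col4, col5 from part_table_02;

    insert into staging
    select col1, col2, col3, col4, col5 from part_table_03;
    -- ... repeat for the remaining tables ...

    -- Then run the aggregate against the temp table:
    create table test as
    select col1, max(col2), max(col3), max(col4), max(col5)
    from staging
    group by col1;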