Re: ERROR: out of memory DETAIL: Failed on request of size ??? - Mailing list pgsql-general
From: Brian Wong
Subject: Re: ERROR: out of memory DETAIL: Failed on request of size ???
Date:
Msg-id: BLU171-W66299F9AFA9F4C41542E49BEEF0@phx.gbl
In response to: Re: ERROR: out of memory DETAIL: Failed on request of size ??? ("Tomas Vondra" <tv@fuzzy.cz>)
List: pgsql-general
> Date: Fri, 22 Nov 2013 20:11:47 +0100
> Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size ???
> From: tv@fuzzy.cz
> To: bwong64@hotmail.com
> CC: bricklen@gmail.com; pgsql-general@postgresql.org
>
> On 19 November 2013, 5:30, Brian Wong wrote:
> > I've tried every work_mem value from 1GB all the way up to 40GB, with no
> > effect on the error. I'd like to think of this problem as a server
> > process memory (not the server's buffers) or client process memory issue,
> > primarily because when we tested the error there was no other load
> > whatsoever. Unfortunately, the error doesn't say what kind of memory ran
> > out.
>
> Hi Brian,
>
> first of all, please don't top-post, especially if the previous response
> used bottom-post. Thank you ;-)
>
> Regarding the issue you're seeing:
>
> * Increasing work_mem in the hope that it will make the issue go away is
> pointless. If work_mem is too low, PostgreSQL will automatically
> spill the data to disk (e.g. it won't sort in memory, but will do an
> on-disk merge sort). That will never fail; the message "failed on request
> of size" actually comes from malloc, when requesting another chunk of
> memory from the OS. So you're hitting an OS-level memory limit.
After changing the shared_buffers setting to 200MB, the developer has confirmed that the out-of-memory error no longer happens. So thanks, folks.
Playing with work_mem was out of desperation. PostgreSQL simply reporting "out of memory" wasn't informative enough about the problem: for example, is it the server's buffers, the server process, or the client process that ran out?
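In case it helps anyone else chasing this, the quickest sanity check of what the server is actually running with is just the following (the work_mem value shown is only an example, not necessarily ours):

    -- run in psql against the affected database, after the restart
    SHOW shared_buffers;   -- now reports 200MB instead of 18GB
    SHOW work_mem;         -- e.g. 64MB, or whatever modest value is configured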
> Note: AFAIK the only operation that does not spill to disk, and may fail
> with OOM-like errors, is hash aggregate. But by increasing work_mem
> you're actually encouraging PostgreSQL to make this planning error.
>
> I see the query you're running is doing MAX() so it might be hitting this
> issue. How much data are you dealing with? How many groups are in the
> result?
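For reference, the kind of check this points at would look something like the sketch below; the table and column names are made up, not our real schema:

    EXPLAIN
    SELECT device_id, max(reading_ts), max(reading_value)
    FROM sensor_readings          -- hypothetical table
    GROUP BY device_id;
    -- "HashAggregate" in the plan means the whole hash table has to fit in memory;
    -- a sort-based "GroupAggregate" plan can spill to disk instead.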
>
> * Setting shared_buffers to 18GB is almost certainly a bad choice. I've yet
> to see a real-world database where shared_buffers over 8GB actually makes a
> measurable difference. More is not always better, and you're actually
> reserving memory that can't be used for work_mem (so you're making the
> issue worse). Please decrease shared_buffers to e.g. 4GB, then try to
> increase it and measure the performance difference.
>
Looking at the documentation on how to configure shared_buffers, it's very unclear to me how to maximize performance by giving the buffers as much memory as possible. On one hand, the documentation says I shouldn't go high on the shared_buffers setting; on the other hand, the more memory you allocate to the buffers, the better the performance is supposed to be. So at least as of 9.1, this is annoying. I've heard that starting with 9.2 this behavior changed dramatically?
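The "measure it" advice presumably boils down to something like this (the query is made up, and on 9.1 every shared_buffers change means editing postgresql.conf and restarting the server):

    \timing on
    -- run the same representative query a few times per shared_buffers setting
    SELECT max(reading_value) FROM sensor_readings WHERE reading_ts >= '2013-01-01';
    -- then compare the reported runtimes across e.g. 2GB, 4GB, 8GB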
> * So how much memory does the query allocate? Can you watch it with
> top/free to get an idea of whether it e.g. allocates all available memory,
> or allocates only 1GB and then fails, or something else?
>
> * I believe you're hitting some sort of limit imposed by the operating
> system. Please check ulimit and overcommit settings.
>
> * BTW the SO post you mentioned as a perfect match was talking about a query
> executed over dblink - are you doing the same? If so, try to move the
> aggregation into the query (so that the aggregation happens on the other
> end).
Nope, we're not using dblink in this case.
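(For anyone who is, the push-down Tomas describes would look roughly like this; the connection string, table, and column names are made up, and it assumes the dblink extension is installed:)

    -- aggregate on the remote side so only the grouped rows travel over dblink
    SELECT *
    FROM dblink('host=remotehost dbname=analytics',
                'SELECT device_id, max(reading_value)
                   FROM sensor_readings
                  GROUP BY device_id')
         AS t(device_id integer, max_reading numeric);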
>
> regards
> Tomas
>
> >
> > --- Original Message ---
> >
> > From: "bricklen" <bricklen@gmail.com>
> > Sent: November 18, 2013 7:25 PM
> > To: "Brian Wong" <bwong64@hotmail.com>
> > Cc: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of
> > size ???
> >
> > On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong <bwong64@hotmail.com> wrote:
> >
> >> We'd like to seek out your expertise on postgresql regarding this error
> >> that we're getting in an analytical database.
> >>
> >> Some specs:
> >> proc: dual Intel Xeon X5650 @ 2.67GHz, 6 cores each, hyperthreading on.
> >> memory: 48GB
> >> OS: Oracle Enterprise Linux 6.3
> >> postgresql version: 9.1.9
> >> shared_buffers: 18GB
> >>
> >> After doing a lot of googling, I've tried setting FETCH_COUNT in psql
> >> and/or setting work_mem. I'm just not able to work around this issue,
> >> unless I take out all of the MAX() functions but one.
> >>
> >
> > What is your work_mem set to?
> > Did testing show that shared_buffers set to 18GB was effective? That seems
> > about 2 to 3 times beyond what you probably want.
> >
>
>