Thread: Out of memory in create index

Out of memory in create index

From
"David Wilson"
Date:
After dropping an index to do some full-table updating, I'm running
into an out-of-memory issue while recreating one of my indices. This
is on PostgreSQL 8.3 running on Linux.

The table in question has about 300m rows. The index is on a single
integer column. There are approximately 4000 unique values among the
rows.

create index val_datestamp_idx on vals(datestamp) tablespace space2;

About 30 seconds into the query, I get:
ERROR:  out of memory
DETAIL:  Failed on request of size 536870912.

Increasing maintenance_work_mem from 1GB to 2GB changed nothing at
all: the exact same error at the exact same time. Watching memory on
the machine shows that the out-of-memory error happens when the
machine is only at about 35% user memory. CREATE INDEX CONCURRENTLY
fails with an identical error.
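
[Editor's note, not part of the original mail: the request size in the
DETAIL line is a round power of two, which is worth noticing before
tuning the memory settings discussed below.]

```python
# Convert the failed allocation request from the DETAIL line to MiB.
request_bytes = 536870912
print(request_bytes / 2**20)  # 512.0 MiB, i.e. exactly 0.5 GiB
```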

Two other indexes (multicolumn) on the same table have already been
successfully recreated, so this puzzles me.

Actually, while I was writing this, I added an additional column to
the index, and it now appears to be completing: memory has reached
about the point at which it had been failing and is now holding
steady, and the query has been running significantly longer than the
30 seconds or so it previously took to error out. I sort by both
columns at times, so the extra column may in fact turn out to be
useful, but the failure of the single-column CREATE INDEX, when the
other creates succeeded, has me confused. Can anyone shed some light
on the situation?
--
- David T. Wilson
david.t.wilson@gmail.com

Re: Out of memory in create index

From
Gregory Stark
Date:
"David Wilson" <david.t.wilson@gmail.com> writes:

> create index val_datestamp_idx on vals(datestamp) tablespace space2;
>
> About 30 seconds into the query, I get:
> ERROR:  out of memory
> DETAIL:  Failed on request of size 536870912.
>
> Increasing maintenance_work_mem from 1GB to 2GB changed nothing at
> all: the exact same error at the exact same time. Watching memory on
> the machine shows that the out-of-memory error happens when the
> machine is only at about 35% user memory. CREATE INDEX CONCURRENTLY
> fails with an identical error.

Try *lowering* maintenance_work_mem. That's how much memory you're telling the
index build to use. Evidently your machine doesn't have enough RAM/swap to
handle 1GB of temporary sort space. In practice, values over a few hundred
megabytes don't seem to help much anyway. Try 512MB or 256MB.

Also, a little-known fact is that an index build can actually allocate
maintenance_work_mem plus an extra work_mem. So if you have work_mem set
unreasonably high, that could be contributing to the problem.
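
[Editor's note, not part of the original mail: both settings can be
lowered for just the rebuild session, without touching postgresql.conf.
A sketch, using the values suggested above; the work_mem figure is an
illustrative choice, not from the thread:]

```sql
-- Lower the sort memory for this session only, then rebuild.
SET maintenance_work_mem = '256MB';
SET work_mem = '32MB';  -- the build may allocate an extra work_mem on top
CREATE INDEX val_datestamp_idx ON vals (datestamp) TABLESPACE space2;
RESET maintenance_work_mem;
RESET work_mem;
```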

> Actually, while I was writing this, I added an additional column to
> the index, and it now appears to be completing: memory has reached
> about the point at which it had been failing and is now holding
> steady, and the query has been running significantly longer than the
> 30 seconds or so it previously took to error out. I sort by both
> columns at times, so the extra column may in fact turn out to be
> useful, but the failure of the single-column CREATE INDEX, when the
> other creates succeeded, has me confused. Can anyone shed some light
> on the situation?

How much memory the OS allows Postgres to allocate will depend on a lot of
external factors. At a guess you had some other services or queries running at
the same time the first time which reduced the available memory.
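
[Editor's note, not part of the original mail: two quick OS-side checks
for the "external factors" mentioned above; these are common Linux
diagnostics, not something suggested in the thread:]

```shell
# Per-process virtual memory limit: "unlimited" or a size in KB.
# A low limit here can fail large allocations even with free RAM.
ulimit -v
# Linux-only: kernel overcommit policy (0 = heuristic, 1 = always,
# 2 = strict accounting against RAM + swap).
cat /proc/sys/vm/overcommit_memory 2>/dev/null || true
```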

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

Re: Out of memory in create index

From
"David Wilson"
Date:
On Mon, Oct 13, 2008 at 6:44 AM, Gregory Stark <stark@enterprisedb.com> wrote:

> How much memory the OS allows Postgres to allocate will depend on a lot of
> external factors. At a guess you had some other services or queries running at
> the same time the first time which reduced the available memory.

I'm sorry; I was insufficiently clear. Postgres was the only service
running, and there were no additional queries happening at the same
time. (This database is on a dedicated machine; the only other things
that run are some decision-support applications, and they were all off
at the time.) In addition, the 35% memory-usage figure was for
user-space processes in total, not for Postgres specifically; the swap
space was completely clear. maintenance_work_mem + work_mem is well
under the total amount of RAM on the system, and certainly well under
RAM + swap.

I'll try building that index with a lower maintenance_work_mem this
evening, when I can shut off the other processes again, though given
the above it strikes me as unlikely to be the problem.

Also, the thing that has me even more confused is the fact that it
worked when I added an additional column to the index.


--
- David T. Wilson
david.t.wilson@gmail.com