Thread: DELETE taking too much memory

From: vincent dephily

Hi,

I have a delete query taking 7.2G of RAM (and counting) but I do not
understand why so much memory is necessary. The server has 12G, and
I'm afraid it'll go into swap. Using postgres 8.3.14.

I'm purging some old data from table t1, which should cascade-delete
referencing rows in t2. Here's an anonymized rundown:


# \d t1
                             Table "public.t1"
  Column   |            Type             |             Modifiers
-----------+-----------------------------+---------------------------------
 t1id      | integer                     | not null default
nextval('t1_t1id_seq'::regclass)
(...snip...)
Indexes:
    "message_pkey" PRIMARY KEY, btree (id)
(...snip...)

# \d t2
                               Table "public.t2"
     Column      |            Type             |        Modifiers
-----------------+-----------------------------+-----------------------------
 t2id            | integer                     | not null default
nextval('t2_t2id_seq'::regclass)
 t1id            | integer                     | not null
 foo             | integer                     | not null
 bar             | timestamp without time zone | not null default now()
Indexes:
    "t2_pkey" PRIMARY KEY, btree (t2id)
    "t2_bar_key" btree (bar)
    "t2_t1id_key" btree (t1id)
Foreign-key constraints:
    "t2_t1id_fkey" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE
RESTRICT ON DELETE CASCADE

# explain delete from t1 where t1id in (select t1id from t2 where
foo=0 and bar < '20101101');
                               QUERY PLAN
-----------------------------------------------------------------------------
 Nested Loop  (cost=5088742.39..6705282.32 rows=30849 width=6)
   ->  HashAggregate  (cost=5088742.39..5089050.88 rows=30849 width=4)
         ->  Index Scan using t2_bar_key on t2  (cost=0.00..5035501.50
rows=21296354 width=4)
               Index Cond: (bar < '2010-11-01 00:00:00'::timestamp
without time zone)
               Filter: (foo = 0)
   ->  Index Scan using t1_pkey on t1  (cost=0.00..52.38 rows=1 width=10)
         Index Cond: (t1.t1id = t2.t1id)
(7 rows)


Note that the estimate of 30849 rows is way off: there should be
around 55M rows deleted from t1, and 2-3 times as many from t2.

When looking at the plan, I can easily imagine that data gets
accumulated below the nested loop (thus using all that memory), but why
isn't each entry freed once the corresponding row has been deleted from t1? That
entry isn't going to be found again in t1 or in t2, so why keep it
around?

Is there a better way to write this query? Would postgres 8.4/9.0
handle things better?



Thanks in advance.


--
Vincent de Phily

Re: [GENERAL] DELETE taking too much memory

From: Guillaume Lelarge

On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
> Hi,
>
> I have a delete query taking 7.2G of ram (and counting) but I do not
> understand why so much memory is necessary. The server has 12G, and
> I'm afraid it'll go into swap. Using postgres 8.3.14.
>
> I'm purging some old data from table t1, which should cascade-delete
> referencing rows in t2. Here's an anonymized rundown :
>
>
> # \d t1
>                              Table "public.t1"
>   Column   |            Type             |             Modifiers
> -----------+-----------------------------+---------------------------------
>  t1id      | integer                     | not null default
> nextval('t1_t1id_seq'::regclass)
> (...snip...)
> Indexes:
>     "message_pkey" PRIMARY KEY, btree (id)
> (...snip...)
>
> # \d t2
>                                Table "public.t2"
>      Column      |            Type             |        Modifiers
> -----------------+-----------------------------+-----------------------------
>  t2id            | integer                     | not null default
> nextval('t2_t2id_seq'::regclass)
>  t1id            | integer                     | not null
>  foo             | integer                     | not null
>  bar             | timestamp without time zone | not null default now()
> Indexes:
>     "t2_pkey" PRIMARY KEY, btree (t2id)
>     "t2_bar_key" btree (bar)
>     "t2_t1id_key" btree (t1id)
> Foreign-key constraints:
>     "t2_t1id_fkey" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE
> RESTRICT ON DELETE CASCADE
>
> # explain delete from t1 where t1id in (select t1id from t2 where
> foo=0 and bar < '20101101');
>                                QUERY PLAN
> -----------------------------------------------------------------------------
>  Nested Loop  (cost=5088742.39..6705282.32 rows=30849 width=6)
>    ->  HashAggregate  (cost=5088742.39..5089050.88 rows=30849 width=4)
>          ->  Index Scan using t2_bar_key on t2  (cost=0.00..5035501.50
> rows=21296354 width=4)
>                Index Cond: (bar < '2010-11-01 00:00:00'::timestamp
> without time zone)
>                Filter: (foo = 0)
>    ->  Index Scan using t1_pkey on t1  (cost=0.00..52.38 rows=1 width=10)
>          Index Cond: (t1.t1id = t2.t1id)
> (7 rows)
>
>
> Note that the estimate of 30849 rows is way off : there should be
> around 55M rows deleted from t1, and 2-3 times as much from t2.
>
> When looking at the plan, I can easily imagine that data gets
> accumulated below the nestedloop (thus using all that memory), but why
> isn't each entry freed once one row has been deleted from t1 ? That
> entry isn't going to be found again in t1 or in t2, so why keep it
> around ?
>
> Is there a better way to write this query ? Would postgres 8.4/9.0
> handle things better ?
>

Do you have any DELETE triggers in t1 and/or t2?
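
For reference, a quick way to list whatever triggers exist on those tables is a
catalog query along these lines (just a sketch; on 8.3 it will also show the
FK's internal constraint triggers):

-- Sketch: list triggers defined on t1 and t2
SELECT c.relname AS table_name, t.tgname AS trigger_name
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
WHERE c.relname IN ('t1', 't2')
ORDER BY 1, 2;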


--
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] DELETE taking too much memory

From: Dean Rasheed

> On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
>> Hi,
>>
>> I have a delete query taking 7.2G of ram (and counting) but I do not
>> understand why so much memory is necessary. The server has 12G, and
>> I'm afraid it'll go into swap. Using postgres 8.3.14.
>>
>> I'm purging some old data from table t1, which should cascade-delete
>> referencing rows in t2. Here's an anonymized rundown :
>>
>> # explain delete from t1 where t1id in (select t1id from t2 where
>> foo=0 and bar < '20101101');

It looks as though you're hitting one of the known issues with
PostgreSQL and FKs. The FK constraint checks and CASCADE actions are
implemented using AFTER triggers, which are queued up during the query
to be executed at the end. For very large queries, this queue of
pending triggers can become very large, using up all available memory.

There's a TODO item to try to fix this for a future version of
PostgreSQL (maybe I'll have another go at it for 9.2), but at the
moment all versions of PostgreSQL suffer from this problem.

The simplest work-around for you might be to break your deletes up
into smaller chunks, say 100k or 1M rows at a time, eg:

delete from t1 where t1id in (select t1id from t2 where foo=0 and bar
< '20101101' limit 100000);
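
If you want to drive that from the database side, one rough sketch is to wrap
the batch in a plpgsql function and call it repeatedly until it returns 0 (the
function name and batch size here are only illustrative, and this assumes
plpgsql is installed on your 8.3 database):

-- Sketch only: one batch of the purge, returning the number of rows deleted
CREATE OR REPLACE FUNCTION purge_t1_batch(batch_size integer) RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    DELETE FROM t1
    WHERE t1id IN (SELECT t1id FROM t2
                   WHERE foo = 0 AND bar < '20101101'
                   LIMIT batch_size);
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;
END;
$$ LANGUAGE plpgsql;

-- then run: SELECT purge_t1_batch(100000);  -- repeat until it returns 0

Since each DELETE statement fires its pending FK triggers at the end of that
statement, keeping the batches small also keeps the trigger queue (and hence
the memory use) bounded.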

Regards,
Dean

Re: [GENERAL] DELETE taking too much memory

From: Dean Rasheed

On 8 July 2011 10:44, Vincent de Phily
<vincent.dephily@mobile-devices.fr> wrote:
> On Friday 08 July 2011 10:05:47 Dean Rasheed wrote:
>> > On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
>> >> Hi,
>> >>
>> >> I have a delete query taking 7.2G of ram (and counting) but I do not
>> >> understand why so much memory is necessary. The server has 12G, and
>> >> I'm afraid it'll go into swap. Using postgres 8.3.14.
>> >>
>> >> I'm purging some old data from table t1, which should cascade-delete
>> >> referencing rows in t2. Here's an anonymized rundown :
>> >>
>> >> # explain delete from t1 where t1id in (select t1id from t2 where
>> >> foo=0 and bar < '20101101');
>>
>> It looks as though you're hitting one of the known issues with
>> PostgreSQL and FKs. The FK constraint checks and CASCADE actions are
>> implemented using AFTER triggers, which are queued up during the query
>> to be executed at the end. For very large queries, this queue of
>> pending triggers can become very large, using up all available memory.
>>
>> There's a TODO item to try to fix this for a future version of
>> PostgreSQL (maybe I'll have another go at it for 9.2), but at the
>> moment all versions of PostgreSQL suffer from this problem.
>
> That's very interesting, and a more plausible not-optimized-yet item than my
> guesses so far, thanks. Drop me a mail if you work on this, and I'll find some
> time to test your code.
>
> I'm wondering though : this sounds like the behaviour of a "deferrable" fkey,
> which AFAICS is not the default and not my case ? I haven't explored that area
> of constraints yet, so there's certainly some detail that I'm missing.
>

Yes, it's the same issue that affects deferrable PK and FK
constraints, but even non-deferrable FKs use AFTER ROW triggers that
suffer from this problem. These triggers don't show up in a "\d" from
psql, but they are there (try select * from pg_trigger where
tgconstrrelid = 't1'::regclass) and because they fire AFTER rather
than BEFORE, queuing up large numbers of them is a problem.
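
Spelled out, and widened to catch the constraint triggers on both sides of the
FK, that catalog query looks something like this (a sketch for reference):

-- Sketch: the internal constraint triggers that do not appear in \d
SELECT tgname, tgrelid::regclass AS table_with_trigger
FROM pg_trigger
WHERE tgconstrrelid = 't1'::regclass
   OR tgconstrrelid = 't2'::regclass;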

Regards,
Dean


>
>> The simplest work-around for you might be to break your deletes up
>> into smaller chunks, say 100k or 1M rows at a time, eg:
>>
>> delete from t1 where t1id in (select t1id from t2 where foo=0 and bar
>> < '20101101' limit 100000);
>
> Yes, that's what we ended up doing. We canceled the query after 24h, shortly
> before the OOM killer would have, and started doing things in smaller batches.
>
>
> --
> Vincent de Phily
>

Re: [GENERAL] DELETE taking too much memory

From: Claudio Freire

On Fri, Jul 8, 2011 at 12:48 PM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
> Yes, it's the same issue that affects deferrable PK and FK
> constraints, but even non-deferrable FKs use AFTER ROW triggers that
> suffer from this problem. These triggers don't show up in a "\d" from
> psql, but they are there (try select * from pg_trigger where
> tgconstrrelid = 't1'::regclass) and because they fire AFTER rather
> than BEFORE, queuing up large numbers of them is a problem.

I would imagine an "easy" solution would be to "compress" the queue by
inserting a single element representing all rows created by transaction id X.

I.e.: a delete or update will need to check all the row versions it
creates with its txid, so that txid could be used to represent the rows
that need checking afterwards, right?

Re: [GENERAL] DELETE taking too much memory

From: Jose Ildefonso Camargo Tolosa


On Fri, Jul 8, 2011 at 4:35 AM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
> On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
>> Hi,
>>
>> I have a delete query taking 7.2G of ram (and counting) but I do not
>> understand why so much memory is necessary. The server has 12G, and
>> I'm afraid it'll go into swap. Using postgres 8.3.14.
>>
>> I'm purging some old data from table t1, which should cascade-delete
>> referencing rows in t2. Here's an anonymized rundown :
>>
>> # explain delete from t1 where t1id in (select t1id from t2 where
>> foo=0 and bar < '20101101');

> It looks as though you're hitting one of the known issues with
> PostgreSQL and FKs. The FK constraint checks and CASCADE actions are
> implemented using AFTER triggers, which are queued up during the query
> to be executed at the end. For very large queries, this queue of
> pending triggers can become very large, using up all available memory.
>
> There's a TODO item to try to fix this for a future version of
> PostgreSQL (maybe I'll have another go at it for 9.2), but at the
> moment all versions of PostgreSQL suffer from this problem.
>
> The simplest work-around for you might be to break your deletes up
> into smaller chunks, say 100k or 1M rows at a time, eg:
>
> delete from t1 where t1id in (select t1id from t2 where foo=0 and bar
> < '20101101' limit 100000);

I'd like to comment here: I had serious performance issues with a similar query (the planner did horrible things). I'm not sure whether the planner will do the same dumb thing it did for me; my query was against the same table (i.e., t1 = t2). I had this query:

delete from t1 where ctid in (select ctid from t1 where created_at<'20101231' limit 10000);   <-- this was slooooow. Changed to:

delete from t1 where ctid = any(array(select ctid from t1 where created_at<'20101231' limit 10000));   <-- a lot faster.

So, will the same principle work here? Doing this:

delete from t1 where t1id = any(array(select t1id from t2 where foo=0 and bar
< '20101101' limit 100000));  <-- would this query be faster than the original one?
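
One way to answer that on a test copy would be to compare the two forms with
EXPLAIN ANALYZE inside a transaction that gets rolled back (sketch only; note
that EXPLAIN ANALYZE really executes the delete, cascade included, so keep the
LIMIT small or use a copy of the data):

BEGIN;
EXPLAIN ANALYZE
DELETE FROM t1
WHERE t1id IN (SELECT t1id FROM t2
               WHERE foo = 0 AND bar < '20101101' LIMIT 100000);
ROLLBACK;

BEGIN;
EXPLAIN ANALYZE
DELETE FROM t1
WHERE t1id = ANY(ARRAY(SELECT t1id FROM t2
                       WHERE foo = 0 AND bar < '20101101' LIMIT 100000));
ROLLBACK;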

 



Re: DELETE taking too much memory

From: "French, Martin"

How up to date are the statistics for the tables in question?

What value do you have for effective_cache_size?

My guess would be that the planner thinks the method it is using is right,
either given its current row-count estimates or the amount of memory
it thinks it has to play with.
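
For reference, the standard statistics views show when the tables were last
analyzed, and SHOW gives the current setting (just a sketch):

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('t1', 't2');

SHOW effective_cache_size;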

-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of vincent
dephily
Sent: 07 July 2011 14:34
To: pgsql-general@postgresql.org; pgsql-performance@postgresql.org
Subject: [PERFORM] DELETE taking too much memory

Hi,

I have a delete query taking 7.2G of RAM (and counting) but I do not
understand why so much memory is necessary. The server has 12G, and
I'm afraid it'll go into swap. Using postgres 8.3.14.

I'm purging some old data from table t1, which should cascade-delete
referencing rows in t2. Here's an anonymized rundown:


# \d t1
                             Table "public.t1"
  Column   |            Type             |             Modifiers
-----------+-----------------------------+---------------------------------
 t1id      | integer                     | not null default
nextval('t1_t1id_seq'::regclass)
(...snip...)
Indexes:
    "message_pkey" PRIMARY KEY, btree (id)
(...snip...)

# \d t2
                               Table "public.t2"
     Column      |            Type             |        Modifiers
-----------------+-----------------------------+-----------------------------
 t2id            | integer                     | not null default
nextval('t2_t2id_seq'::regclass)
 t1id            | integer                     | not null
 foo             | integer                     | not null
 bar             | timestamp without time zone | not null default now()
Indexes:
    "t2_pkey" PRIMARY KEY, btree (t2id)
    "t2_bar_key" btree (bar)
    "t2_t1id_key" btree (t1id)
Foreign-key constraints:
    "t2_t1id_fkey" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE
RESTRICT ON DELETE CASCADE

# explain delete from t1 where t1id in (select t1id from t2 where
foo=0 and bar < '20101101');
                               QUERY PLAN
-----------------------------------------------------------------------------
 Nested Loop  (cost=5088742.39..6705282.32 rows=30849 width=6)
   ->  HashAggregate  (cost=5088742.39..5089050.88 rows=30849 width=4)
         ->  Index Scan using t2_bar_key on t2  (cost=0.00..5035501.50
rows=21296354 width=4)
               Index Cond: (bar < '2010-11-01 00:00:00'::timestamp
without time zone)
               Filter: (foo = 0)
   ->  Index Scan using t1_pkey on t1  (cost=0.00..52.38 rows=1 width=10)
         Index Cond: (t1.t1id = t2.t1id)
(7 rows)


Note that the estimate of 30849 rows is way off: there should be
around 55M rows deleted from t1, and 2-3 times as many from t2.

When looking at the plan, I can easily imagine that data gets
accumulated below the nested loop (thus using all that memory), but why
isn't each entry freed once the corresponding row has been deleted from t1? That
entry isn't going to be found again in t1 or in t2, so why keep it
around?

Is there a better way to write this query? Would postgres 8.4/9.0
handle things better?



Thanks in advance.


--
Vincent de Phily




Re: [GENERAL] DELETE taking too much memory

From: Vincent de Phily

On Thursday 07 July 2011 22:26:45 Guillaume Lelarge wrote:
> On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
> > Hi,
> >
> > I have a delete query taking 7.2G of ram (and counting) but I do not
> > understand why so much memory is necessary. The server has 12G, and
> > I'm afraid it'll go into swap. Using postgres 8.3.14.
> >
> > I'm purging some old data from table t1, which should cascade-delete
> > referencing rows in t2. Here's an anonymized rundown :
> >
> >
> > # \d t1
> >                              Table "public.t1"
> >   Column   |            Type             |             Modifiers
> > -----------+-----------------------------+---------------------------------
> >  t1id      | integer                     | not null default
> > nextval('t1_t1id_seq'::regclass)
> > (...snip...)
> > Indexes:
> >     "message_pkey" PRIMARY KEY, btree (id)
> > (...snip...)
> >
> > # \d t2
> >                                Table "public.t2"
> >      Column      |            Type             |        Modifiers
> > -----------------+-----------------------------+-----------------------------
> >  t2id            | integer                     | not null default
> > nextval('t2_t2id_seq'::regclass)
> >  t1id            | integer                     | not null
> >  foo             | integer                     | not null
> >  bar             | timestamp without time zone | not null default now()
> > Indexes:
> >     "t2_pkey" PRIMARY KEY, btree (t2id)
> >     "t2_bar_key" btree (bar)
> >     "t2_t1id_key" btree (t1id)
> > Foreign-key constraints:
> >     "t2_t1id_fkey" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE
> > RESTRICT ON DELETE CASCADE
> >
> > # explain delete from t1 where t1id in (select t1id from t2 where
> > foo=0 and bar < '20101101');
> >                                QUERY PLAN
> > -----------------------------------------------------------------------------
> >  Nested Loop  (cost=5088742.39..6705282.32 rows=30849 width=6)
> >    ->  HashAggregate  (cost=5088742.39..5089050.88 rows=30849 width=4)
> >          ->  Index Scan using t2_bar_key on t2  (cost=0.00..5035501.50
> > rows=21296354 width=4)
> >                Index Cond: (bar < '2010-11-01 00:00:00'::timestamp
> > without time zone)
> >                Filter: (foo = 0)
> >    ->  Index Scan using t1_pkey on t1  (cost=0.00..52.38 rows=1 width=10)
> >          Index Cond: (t1.t1id = t2.t1id)
> > (7 rows)
> >
> >
> > Note that the estimate of 30849 rows is way off : there should be
> > around 55M rows deleted from t1, and 2-3 times as much from t2.
> >
> > When looking at the plan, I can easily imagine that data gets
> > accumulated below the nestedloop (thus using all that memory), but why
> > isn't each entry freed once one row has been deleted from t1 ? That
> > entry isn't going to be found again in t1 or in t2, so why keep it
> > around ?
> >
> > Is there a better way to write this query ? Would postgres 8.4/9.0
> > handle things better ?
>
> Do you have any DELETE triggers in t1 and/or t2?

No, there are insert and update triggers on t1 (both of which insert into t2),
but no delete trigger. Deletions do cascade from t1 to t2 because of the
foreign key.
--
Vincent de Phily
Mobile Devices
+33 (0) 142 119 325
+353 (0) 85 710 6320


Re: DELETE taking too much memory

From: Vincent de Phily

On Thursday 07 July 2011 19:54:08 French, Martin wrote:
> How up to date are the statistics for the tables in question?
>
> What value do you have for effective cache size?
>
> My guess would be that planner thinks the method it is using is right
> either for its current row number estimations, or the amount of memory
> it thinks it has to play with.

Not very up to date I'm afraid (as shown by the low estimate of deleted rows).
Table t2 has been insert-only since its re-creation (that's another story),
while t1 is your classic insert-many, update-recent.

We haven't tweaked effective_cache_size yet; it's on the TODO... like many
other things :/
--
Vincent de Phily

Re: DELETE taking too much memory

From: "French, Martin"

If the query planner thinks it has only the default amount of memory
available (effective_cache_size = 128MB) and the stats are out of date,
then it will not be able to plan the execution properly.

I would recommend setting effective_cache_size to an appropriate
value, running ANALYZE on both tables with an appropriate statistics
target, and then explaining the query again to see if it's more
accurate.
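
A rough sketch of those steps (the values are placeholders to adapt to your
server, not recommendations):

SET effective_cache_size = '8GB';                      -- placeholder value
ALTER TABLE t2 ALTER COLUMN foo SET STATISTICS 500;    -- placeholder stats target
ALTER TABLE t2 ALTER COLUMN bar SET STATISTICS 500;
ANALYZE t1;
ANALYZE t2;
EXPLAIN DELETE FROM t1
WHERE t1id IN (SELECT t1id FROM t2 WHERE foo = 0 AND bar < '20101101');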

Cheers

-----Original Message-----
From: Vincent de Phily [mailto:vincent.dephily@mobile-devices.fr]
Sent: 08 July 2011 10:20
To: French, Martin
Cc: pgsql-general@postgresql.org; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] DELETE taking too much memory

On Thursday 07 July 2011 19:54:08 French, Martin wrote:
> How up to date are the statistics for the tables in question?
>
> What value do you have for effective cache size?
>
> My guess would be that planner thinks the method it is using is right
> either for its current row number estimations, or the amount of memory
> it thinks it has to play with.

Not very up to date I'm afraid (as shown by the low estimate of deleted rows).
Table t2 has been insert-only since its re-creation (that's another story),
while t1 is your classic insert-many, update-recent.

We haven't tweaked effective_cache_size yet; it's on the TODO... like many
other things :/
--
Vincent de Phily



Re: [GENERAL] DELETE taking too much memory

From: Vincent de Phily

On Friday 08 July 2011 10:05:47 Dean Rasheed wrote:
> > On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
> >> Hi,
> >>
> >> I have a delete query taking 7.2G of ram (and counting) but I do not
> >> understand why so much memory is necessary. The server has 12G, and
> >> I'm afraid it'll go into swap. Using postgres 8.3.14.
> >>
> >> I'm purging some old data from table t1, which should cascade-delete
> >> referencing rows in t2. Here's an anonymized rundown :
> >>
> >> # explain delete from t1 where t1id in (select t1id from t2 where
> >> foo=0 and bar < '20101101');
>
> It looks as though you're hitting one of the known issues with
> PostgreSQL and FKs. The FK constraint checks and CASCADE actions are
> implemented using AFTER triggers, which are queued up during the query
> to be executed at the end. For very large queries, this queue of
> pending triggers can become very large, using up all available memory.
>
> There's a TODO item to try to fix this for a future version of
> PostgreSQL (maybe I'll have another go at it for 9.2), but at the
> moment all versions of PostgreSQL suffer from this problem.

That's very interesting, and a more plausible not-optimized-yet item than my
guesses so far, thanks. Drop me a mail if you work on this, and I'll find some
time to test your code.

I'm wondering though: this sounds like the behaviour of a "deferrable" fkey,
which AFAICS is not the default and not my case? I haven't explored that area
of constraints yet, so there's certainly some detail that I'm missing.


> The simplest work-around for you might be to break your deletes up
> into smaller chunks, say 100k or 1M rows at a time, eg:
>
> delete from t1 where t1id in (select t1id from t2 where foo=0 and bar
> < '20101101' limit 100000);

Yes, that's what we ended up doing. We canceled the query after 24h, shortly
before the OOM killer would have, and started doing things in smaller batches.


--
Vincent de Phily

Re: DELETE taking too much memory

From: Vincent de Phily

On Friday 08 July 2011 10:31:33 French, Martin wrote:
> If the query planner thinks it has the default amount of memory (128MB)
> and the stats are out of date, then it will by no means be able to plan
> proper execution.
>
> I would recommend setting the effective_cache_size to an appropriate
> value, running "analyze" on both tables with an appropriate stats
> target, and then explaining the query again to see if it's more
> accurate.

Yes, I'll schedule those two to run during the night and repost an explain,
for information. However, we worked around the initial problem by running the
delete in smaller batches.

Thanks.
--
Vincent de Phily