Thread: beta1 & beta2 & Windows & heavy load

beta1 & beta2 & Windows & heavy load

From
Daniel Schuchardt
Date:
Hi list,

is it known that Postgres 8.0 (beta1 and beta2, both show the same behavior)
will eat tons of memory on Windows under heavy load?

Scenario:

I have a big delete of about 56000 rows. (DELETE FROM mytable;)
There are triggers (AFTER DELETE) updating 2 or 3 other tables (there
are triggers on these tables too).

At first everything seems OK, but after a short time Postgres starts eating
memory. On my machine one postgres.exe sometimes eats 300MB or more.

Problem one: everything becomes slow because Windows is swapping...
Main problem: if there is too little virtual memory, Postgres aborts
because it runs out of memory. For a 56000-row delete you need > 500MB of
virtual RAM or Postgres aborts.

I noticed the same behavior when updating, let's say, 75000 rows in a
table (with triggers updating other tables as well).

Both situations occur without explicit transactions.

Is this a bug?

Daniel.


Re: beta1 & beta2 & Windows & heavy load

From
Tom Lane
Date:
Daniel Schuchardt <daniel_schuchardt@web.de> writes:
> is it known that Postgres 8.0 (beta1 and beta2, both show the same behavior) 
> will eat tons of memory on Windows under heavy load?

No.

I tried setting up a test case with a 64K-row master table, a slave
table with the same 64K rows and an ON DELETE CASCADE reference to the
master, and a third table with the same 64K values and an ON DELETE
CASCADE reference to the second.  It took a little bit of time to
delete all the master rows, but memory usage was perfectly steady.

Can you provide a concrete test case?
        regards, tom lane


Re: beta1 & beta2 & Windows & heavy load

From
Gavin Sherry
Date:
On Sat, 11 Sep 2004, Daniel Schuchardt wrote:

> Hi list,
>
> is it known that Postgres 8.0 (beta1 and beta2, both show the same behavior)
> will eat tons of memory on Windows under heavy load?

I do not believe that this is known.

>
> Scenario:
>
> I have a big delete of about 56000 rows. (DELETE FROM mytable;)
> There are triggers (AFTER DELETE) updating 2 or 3 other tables (there
> are triggers on these tables too).

What language are the triggers written in? Are they just constraints?

>
> At first everything seems OK, but after a short time Postgres starts eating
> memory. On my machine one postgres.exe sometimes eats 300MB or more.
>
> Problem one: everything becomes slow because Windows is swapping...
> Main problem: if there is too little virtual memory, Postgres aborts
> because it runs out of memory. For a 56000-row delete you need > 500MB of
> virtual RAM or Postgres aborts.
>
> I noticed the same behavior when updating, let's say, 75000 rows in a
> table (with triggers updating other tables as well).
>
> Both situations occur without explicit transactions.
>
> Is this a bug?
>
>

Seems that way. Please provide a reproducible test case so that others can
recreate it.

Thanks,

Gavin


Re: beta1 & beta2 & Windows & heavy load

From
Tom Lane
Date:
Daniel Schuchardt <daniel_schuchardt@web.de> writes:
> Hours later I've located the problem. It's not heavy load but
> subtransactions in triggers. It's very easy to recreate:

> The problem is this syntax:

>   CREATE OR REPLACE FUNCTION do_standard_mgc() RETURNS TRIGGER AS'
>   BEGIN
>    BEGIN
>     --prob also occurs in this case (empty subtransaction)
>    EXCEPTION
>     WHEN OTHERS THEN
>         PERFORM NULL;
>    END;
>    RETURN new;
>   END' LANGUAGE plpgsql;

> It seems that these subtransactions allocate memory that is never freed.

Well, yes, it has to take a lock on the subtransaction XID, which will
be held until main transaction commit.  I'm not sure we have much of a
choice about this --- although it does seem annoying to have a
shared-memory-size constraint on how many subtransactions you can have.
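This is easy to see from SQL. A sketch (assuming an 8.0-era server, where every subtransaction is assigned an XID at start; nothing here is from the original report):

```sql
BEGIN;
SAVEPOINT s1;  -- each SAVEPOINT (or plpgsql EXCEPTION block) is a subtransaction
SAVEPOINT s2;
-- Each subtransaction holds a lock on its own XID; those locks accumulate
-- in the shared lock table until the top-level transaction ends:
SELECT count(*) FROM pg_locks;
COMMIT;        -- only now are the subtransaction XID locks released
```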

The shared memory should be freed on failure, though.  Is that part
reproducible with current sources?
        regards, tom lane


Re: beta1 & beta2 & Windows & heavy load

From
Bruce Momjian
Date:
Tom Lane wrote:
> Daniel Schuchardt <daniel_schuchardt@web.de> writes:
> > Hours later I've located the problem. It's not heavy load but
> > subtransactions in triggers. It's very easy to recreate:
> 
> > The problem is this syntax:
> 
> >   CREATE OR REPLACE FUNCTION do_standard_mgc() RETURNS TRIGGER AS'
> >   BEGIN
> >    BEGIN
> >     --prob also occurs in this case (empty subtransaction)
> >    EXCEPTION
> >     WHEN OTHERS THEN
> >         PERFORM NULL;
> >    END;
> >    RETURN new;
> >   END' LANGUAGE plpgsql;
> 
> > It seems that these subtransactions allocate memory that is never freed.
> 
> Well, yes, it has to take a lock on the subtransaction XID, which will
> be held until main transaction commit.  I'm not sure we have much of a
> choice about this --- although it does seem annoying to have a
> shared-memory-size constraint on how many subtransactions you can have.

You mean 64 (the number of object locks)?  Can you clarify why the
subtransaction is locked in this case and not others?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: beta1 & beta2 & Windows & heavy load

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
>> Well, yes, it has to take a lock on the subtransaction XID, which will
>> be held until main transaction commit.  I'm not sure we have much of a
>> choice about this --- although it does seem annoying to have a
>> shared-memory-size constraint on how many subtransactions you can have.

> You mean 64 (the number of object locks)?

No, max_locks_per_transaction * max_connections.  In practice it seems
to get unhappy at about 10000 locks given default shared-memory sizing.
This is because we leave some overhead slop that the locks table can
expand into before failing altogether.
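With default settings that works out roughly as follows (a sketch; the defaults shown are assumptions about an 8.0-era build, and the exact slop depends on shared-memory sizing):

```sql
SHOW max_locks_per_transaction;  -- 64 by default
SHOW max_connections;            -- e.g. 100
-- Nominal lock-table size: 64 * 100 = 6400 entries; the expansion slop on
-- top of that is roughly where the ~10000-lock failure point comes from.
-- Raising the ceiling means editing postgresql.conf and restarting:
--   max_locks_per_transaction = 256
```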

> Can you clarify why the
> subtransaction is locked in this case and not others?

It's locked in all cases.  A *failed* subtransaction will release its
lock, but a non-failed one will hold the lock till main commit.

We could revert to Alvaro's initial design in which subxacts didn't take
out separate locks on their XIDs; this would make XactLockTableWait a
lot slower though, and probably subject to unwanted deadlocks.  You
really want to release waiters for a subxact as soon as the subxact
fails, rather than making them wait around for the parent.

Right now I'm not seeing a cure that's not worse than the disease.
        regards, tom lane


Re: beta1 & beta2 & Windows & heavy load

From
Tom Lane
Date:
Daniel Schuchardt <daniel_schuchardt@web.de> writes:
> now I get
> ERROR:  buffer XYZ is not owned by resource owner TopTransaction

Yeah, I ran into that earlier today while trying to reproduce your
problem.  It's fixed by this pending patch.  I didn't see any freeze-up
of the system after running out of lock memory, though.  It seemed to
release the locks just fine.

*** src/pl/plpgsql/src/pl_exec.c.orig	Sun Aug 29 22:58:09 2004
--- src/pl/plpgsql/src/pl_exec.c	Sun Sep 12 17:04:30 2004
***************
*** 897,902 ****
--- 897,903 ----
  	 * sub-transaction
  	 */
  	MemoryContext oldcontext = CurrentMemoryContext;
+ 	ResourceOwner oldowner = CurrentResourceOwner;
  	volatile bool caught = false;
  	int			xrc;
  
***************
*** 907,918 ****
  	BeginInternalSubTransaction(NULL);
  	/* Want to run statements inside function's memory context */
  	MemoryContextSwitchTo(oldcontext);
  	if ((xrc = SPI_connect()) != SPI_OK_CONNECT)
  		elog(ERROR, "SPI_connect failed: %s",
  			 SPI_result_code_string(xrc));
  
  	PG_TRY();
! 		rc = exec_stmts(estate, block->body);
  	PG_CATCH();
  	{
  		ErrorData  *edata;
--- 908,922 ----
  	BeginInternalSubTransaction(NULL);
  	/* Want to run statements inside function's memory context */
  	MemoryContextSwitchTo(oldcontext);
+ 
  	if ((xrc = SPI_connect()) != SPI_OK_CONNECT)
  		elog(ERROR, "SPI_connect failed: %s",
  			 SPI_result_code_string(xrc));
  
  	PG_TRY();
! 	{
! 		rc = exec_stmts(estate, block->body);
! 	}
  	PG_CATCH();
  	{
  		ErrorData  *edata;
***************
*** 927,932 ****
--- 931,937 ----
  		/* Abort the inner transaction (and inner SPI connection) */
  		RollbackAndReleaseCurrentSubTransaction();
  		MemoryContextSwitchTo(oldcontext);
+ 		CurrentResourceOwner = oldowner;
  
  		SPI_pop();
***************
*** 958,965 ****
--- 963,973 ----
  		if ((xrc = SPI_finish()) != SPI_OK_FINISH)
  			elog(ERROR, "SPI_finish failed: %s",
  				 SPI_result_code_string(xrc));
  
+ 		ReleaseCurrentSubTransaction();
  		MemoryContextSwitchTo(oldcontext);
+ 		CurrentResourceOwner = oldowner;
+ 
  		SPI_pop();
  	}
  }

        regards, tom lane


Re: beta1 & beta2 & Windows & heavy load

From
Tom Lane
Date:
I wrote:
> We could revert to Alvaro's initial design in which subxacts didn't take
> out separate locks on their XIDs; this would make XactLockTableWait a
> lot slower though, and probably subject to unwanted deadlocks.  You
> really want to release waiters for a subxact as soon as the subxact
> fails, rather than making them wait around for the parent.

> Right now I'm not seeing a cure that's not worse than the disease.

After a while I thought of a possibly rather crocky solution:

1. During CommitSubTransaction, release the lock on the subtransaction's
XID (but not any of its other locks).  This means that the number of
locks held due to just transaction XIDs is no more than the subxact
nesting depth, rather than growing with the number of subcommitted XIDs.

2. XactLockTableWait would have to do something like this:
	for (;;)
	{
		wait for lock associated with xid;
		if (!TransactionIdIsInProgress(xid))
			break;
		xid = SubTransGetParent(xid);
	}

This relies on the fact that TransactionIdIsInProgress will return true
for subcommitted children of still-open parents, but not for aborted
children.  So once we get past the wait, we check to see which case
applies, and wait for the subxact's parent if necessary.  If the subxact
aborted then we need no longer wait.

I'm not totally convinced that this is race-condition-free, but I
haven't been able to poke a hole in it yet.

The other question is whether it's worth some extra overhead in
XactLockTableWait to save on shared memory.  I'm inclined to think it
is, mainly because you don't get into XactLockTableWait in the first
place unless you're going to have to block.  (I'm pretty sure all paths
leading to it have already determined that the other transaction is or
very recently was InProgress.)  So you lose anyway, and losing a few
more microseconds isn't that big a deal.

Comments?
        regards, tom lane


Re: beta1 & beta2 & Windows & heavy load

From
Alvaro Herrera
Date:
On Sun, Sep 12, 2004 at 08:28:42PM -0400, Tom Lane wrote:

> 1. During CommitSubTransaction, release the lock on the subtransaction's
> XID (but not any of its other locks).  This means that the number of
> locks held due to just transaction XIDs is no more than the subxact
> nesting depth, rather than growing with the number of subcommitted XIDs.
> 
> 2. XactLockTableWait would have to do something like this:
> 
>     for (;;)
>     {
>         wait for lock associated with xid;
>         if (!TransactionIdIsInProgress(xid))
>             break;
>         xid = SubTransGetParent(xid);
>     }

FWIW, this works for me.

> The other question is whether it's worth some extra overhead in
> XactLockTableWait to save on shared memory.  I'm inclined to think it
> is, mainly because you don't get into XactLockTableWait in the first
> place unless you're going to have to block.

Furthermore, the savings are possibly unbounded, so this is a good idea
anyway.  From the example case it's not hard to imagine situations where
we will want lots of subtransactions; just using exceptions in functions
will implicitly open new ones!  (This kind of bothers me because of the
possibly very fast use of Xids, accelerating Xid counter wraparound ...)

-- 
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"The principal human characteristic is foolishness"
(Augusto Monterroso)



Re: beta1 & beta2 & Windows & heavy load

From
Tom Lane
Date:
Alvaro Herrera <alvherre@dcc.uchile.cl> writes:
> From the example case it's not hard to imagine situations where
> we will want lots of subtransactions; just using exceptions in functions
> will implicitly open new ones!  (This kind of bothers me because of the
> possibly very fast use of Xids, accelerating Xid counter wraparound ...)

Yeah, that's been bothering me too, but I don't see any easy way around
it.  Without some major rethinking, a subxact has got to have its own
XID that it can put on tuples that it writes; else we cannot cope with
rollback of the subxact, which is rather the point after all...

However: using exceptions in functions is a good example of "trivial
subxacts" that aren't ever going to write any tuples; and so at least
in principle it might be possible to avoid assigning an XID to such
subxacts.

The idle thoughts I had about this were:

1. Do not assign an XID nor take a lock in StartSubTransaction; instead
set the xid field of the transaction state record to InvalidXid.

2. In GetCurrentTransactionId, check for InvalidXid; if seen, assign the
XID (and take the lock on it) before returning.

3. Skip the obvious things at subxact commit/abort time.  A subxact that
has never called GetCurrentTransactionId has certainly never emitted a
tuple, so it need not have any record in WAL.

I can see a number of gotchas to this, but the ones I've thought of so
far all have plausible workarounds:

* I think some places assume that subxacts have XIDs >= their parent.
We could enforce this by requiring GetCurrentTransactionId to
recursively ensure that the parent has an XID before assigning one
to the child.  This shouldn't cause any extra overhead for nested
"trivial transactions".

* We use the subxact ID for disambiguation of various modules'
per-subtransaction state.  However a possible solution is already known
from the rewrite I recently did on trigger.c.  It turned out in that
case to be equally convenient, if not more so, for the trigger module to
generate its own unique IDs for subtransactions (really more like
command counter values than XIDs).  We could propagate the same concept
into the other modules that presently call GetCurrentTransactionId
without the intention of pushing the XID into the database.

So it seems potentially doable, but maybe it's the sort of thing that
had better wait for 8.1.

I will go ahead and make that change in lock release and
XactLockTableWait, though, since that's clearly a killer.
        regards, tom lane


Re: beta1 & beta2 & Windows & heavy load

From
Daniel Schuchardt
Date:
Tom Lane schrieb:

> 
> Can you provide a concrete test case?
> 

Hours later I've located the problem. It's not heavy load but
subtransactions in triggers. It's very easy to recreate.

The problem is this syntax:

  CREATE OR REPLACE FUNCTION do_standard_mgc() RETURNS TRIGGER AS'
  BEGIN
   BEGIN
    --prob also occurs in this case (empty subtransaction)
   EXCEPTION
    WHEN OTHERS THEN
        PERFORM NULL;
   END;
   RETURN new;
  END' LANGUAGE plpgsql;
 

It seems that these subtransactions allocate memory that is never freed.
So create a big table with such a trigger function (in my case AFTER
INSERT OR UPDATE), do a BEGIN and an UPDATE over the whole table, and
Postgres will use more and more memory as long as memory is available,
and then abort.

I can post a sample script, but I think it's easy to recreate this problem.
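A minimal sketch of such a script (reconstructed for the archives; the table and column names are illustrative, not from my real schema):

```sql
CREATE TABLE mytable (id integer, val integer);
INSERT INTO mytable SELECT g, 0 FROM generate_series(1, 75000) g;

CREATE OR REPLACE FUNCTION do_standard_mgc() RETURNS TRIGGER AS'
BEGIN
 BEGIN
  -- empty subtransaction; the problem occurs even with nothing here
 EXCEPTION
  WHEN OTHERS THEN
   PERFORM NULL;
 END;
 RETURN new;
END' LANGUAGE plpgsql;

CREATE TRIGGER mytable_mgc AFTER INSERT OR UPDATE ON mytable
  FOR EACH ROW EXECUTE PROCEDURE do_standard_mgc();

-- One subtransaction fires per row; memory climbs until the UPDATE aborts:
BEGIN;
UPDATE mytable SET val = 1;
COMMIT;
```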

Hope that helps,

Daniel.

BTW: how do I ignore only duplicate/unique key exceptions?

Is there something like WHEN dup_unique_key THEN?
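For what it's worth, plpgsql accepts predefined condition names in EXCEPTION clauses; duplicate-key errors map to unique_violation (SQLSTATE 23505). A sketch (the function and table names are made up):

```sql
CREATE OR REPLACE FUNCTION insert_ignoring_dups(p_id integer) RETURNS void AS'
BEGIN
 INSERT INTO mytable (id) VALUES (p_id);
EXCEPTION
 WHEN unique_violation THEN
  PERFORM NULL;  -- swallow duplicate-key errors; everything else propagates
END' LANGUAGE plpgsql;
```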



Re: beta1 & beta2 & Windows & heavy load

From
Daniel Schuchardt
Date:
Tom Lane schrieb:

> Yeah, I ran into that earlier today while trying to reproduce your
> problem.  It's fixed by this pending patch.  I didn't see any freeze-up
> of the system after running out of lock memory, though.  It seemed to
> release the locks just fine.

Yeah, that error is gone now.

But I get the same behavior as before. Notice that I've never opened a
transaction in the following scenario. I'm unable to do anything with
this connection.

example:

C:\postgres\bin>psql TEST
Welcome to psql 8.0.0beta2, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

Warning: Console codepage (850) differs from windows codepage (1252)
         8-bit characters will not work correctly. See PostgreSQL
         documentation "Installation on Windows" for details.
 

TEST=# SELECT count(*) FROM art;
 count
-------
 13534
(1 row)
TEST=# UPDATE art SET ak_res=0 WHERE ak_nr='###';
UPDATE 0
TEST=# UPDATE art SET ak_res=0;
WARNING:  out of shared memory
CONTEXT:  PL/pgSQL function "do_standard_mgc" line 2 at block variables 
initialization
WARNING:  StartAbortedSubTransaction while in START state
ERROR:  out of shared memory
HINT:  You may need to increase max_locks_per_transaction.
CONTEXT:  PL/pgSQL function "do_standard_mgc" line 2 at block variables 
initialization
TEST=# UPDATE art SET ak_res=0 WHERE ak_nr='###';
ERROR:  current transaction is aborted, commands ignored until end of 
transaction block
TEST=# SELECT count(*) FROM art;
ERROR:  current transaction is aborted, commands ignored until end of 
transaction block
TEST=# ROLLBACK;
ROLLBACK
TEST=# SELECT count(*) FROM art;
WARNING:  out of shared memory
ERROR:  out of shared memory
TEST=#

Daniel


Re: beta1 & beta2 & Windows & heavy load

From
Daniel Schuchardt
Date:
Tom Lane schrieb:
> Can I see a stack trace from that?  Or at least the verbose form of the
> error message?

Actually, I don't know how to create a stack trace; I will do some
research. Verbose output (debug5):

TEST=# UPDATE art SET ak_res=0;
WARNING:  out of shared memory
CONTEXT:  PL/pgSQL function "do_standard_mgc" line 2 at block variables 
initialization
WARNING:  StartAbortedSubTransaction while in START state
ERROR:  out of shared memory
HINT:  You may need to increase max_locks_per_transaction.
CONTEXT:  PL/pgSQL function "do_standard_mgc" line 2 at block variables 
initialization
WARNING:  01000: StartAbortedSubTransaction while in START state
LOCATION:  StartAbortedSubTransaction, xact.c:3207

ERROR:  53200: out of shared memory
HINT:  You may need to increase max_locks_per_transaction.
CONTEXT:  PL/pgSQL function "do_standard_mgc" line 2 at block variables initialization
LOCATION:  LockAcquire, lock.c:562
STATEMENT:  UPDATE art SET ak_res=0;

WARNING:  01000: StartAbortedSubTransaction while in START state
LOCATION:  StartAbortedSubTransaction, xact.c:3207

TEST=# ROLLBACK;
ROLLBACK
TEST=# SELECT count(*) FROM art;
WARNING:  out of shared memory
ERROR:  out of shared memory
TEST=#
WARNING:  53200: out of shared memory
LOCATION:  ShmemAlloc, shmem.c:185
STATEMENT:  SELECT count(*) FROM art;

ERROR:  53200: out of shared memory
LOCATION:  BufTableInsert, buf_table.c:93
STATEMENT:  SELECT count(*) FROM art;


Daniel.


Re: beta1 & beta2 & Windows & heavy load

From
Tom Lane
Date:
Daniel Schuchardt <daniel_schuchardt@web.de> writes:
>   Tom Lane schrieb:
>>> Can I see a stack trace from that?  Or at least the verbose form of the
>>> error message?

>   WARNING:  53200: out of shared memory
>   LOCATION:  ShmemAlloc, shmem.c:185
>   STATEMENT:  SELECT count(*) FROM art;

>   ERROR:  53200: out of shared memory
>   LOCATION:  BufTableInsert, buf_table.c:93
>   STATEMENT:  SELECT count(*) FROM art;

Hmm.  Okay, I think I see what is going on here.  dynahash's HASH_REMOVE
operation sticks the freed space into a freelist associated with the
particular hashtable, but it never releases it for "general" use (and
since we do not have any general-purpose alloc/free code for shared
memory, there's really no way it could do that).

So what is happening is that the subxact-open loop creates new locktable
hash entries until it's run the free space in shared memory down to nil,
and then it errors out.  The lock hash entries are then released ... but
only to the freelist associated with the lock table.  If the shared hash
table for the buffer pool needs to grow afterwards, it's out of luck.

Had you been running the server for very long before forcing the error,
I don't think this would have happened, because the buffer hashtable
would have already expanded to its full working size.

I can think of various tweaks we could make to the hash management code
to avoid this scenario, but I'm not sure that any of those cures aren't
worse than the disease: they all seem to reduce the flexibility of
shared memory allocation instead of increasing it.  And I don't want
to create a full-fledged alloc/free package for shared memory --- the
bang-for-buck ratio for that is way too low.  So I'm inclined to leave
this alone.  Once we fix subxacts to not hold their XID locks after
subcommit, the probability of a problem should go back down to the same
low value that's allowed us to ignore this risk for the past many years.
        regards, tom lane


Re: beta1 & beta2 & Windows & heavy load

From
Sailesh Krishnamurthy
Date:
>>>>> "Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:
   Tom> instead of increasing it.  And I don't want to create a
   Tom> full-fledged alloc/free package for shared memory --- the
   Tom> bang-for-buck ratio for that is way too low.  So I'm inclined

I think I've said it before, but we actually have such a beast - a
shared memory MemoryContext that uses libmm (which apparently Apache
also uses). It should be fairly easy to patch in if it's worth it for
you folks.

-- 
Pip-pip
Sailesh
http://www.cs.berkeley.edu/~sailesh




Re: beta1 & beta2 & Windows & heavy load

From
Daniel Schuchardt
Date:
Tom Lane schrieb:
> 
> Had you been running the server for very long before forcing the error,
> I don't think this would have happened, because the buffer hashtable
> would have already expanded to its full working size.

Yes, you are right - this was a freshly started pg server.

> 
> Once we fix subxacts to not hold their XID locks after
> subcommit, the probability of a problem should go back down to the same
> low value that's allowed us to ignore this risk for the past many years.
> 

I think so, too.


Daniel