Thread: OutOfMemory causing connection leaks
I have a DB where each row is *gigantic* (up to 60MB in size). I have my fetch size set to 200. I realize that this isn't so smart, but the problem it causes is more generic than my specific situation.

While reading, an OutOfMemoryError gets thrown and I start closing things down. So far, so good. I close the statement and close the result set, freeing up a bunch of memory. Unfortunately, I get problems when I try to close the connection and release it to the connection pool. This fails, most likely because the driver was in the middle of reading a tuple when it was rudely interrupted by running out of memory. It actually runs out of memory in PGStream.ReceiveTupleV3(), called from QueryExecutorImpl.processResults().

Later, the connection is closed and PooledConnectionImpl$ConnectionHandler.invoke() is called, which (because we're not in AutoCommit) attempts to roll back the transaction with con.rollback(). This is where the problem occurs. Inside QueryExecutorImpl.processResults(), it gets a "\" from the server (highly likely from the bytea returned by the original query). It, of course, doesn't understand this and throws an "An I/O error occured while sending to the backend." error.

The end result is that the connection doesn't appear to be closed and released back to the connection pool. This means that connections to the database are being leaked, and probably a fair amount of memory with them. This, of course, causes the OutOfMemory error to happen more often.

dave
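A minimal sketch of the cleanup pattern that avoids compounding the failure described above: close each resource independently, so that an exception (or a secondary OutOfMemoryError) while closing one resource does not leave the others open. The class and method names here are invented for illustration; this is not code from the PostgreSQL JDBC driver.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper, not driver code: defensive cleanup after an
// OutOfMemoryError, where any individual close() may itself fail.
public class SafeCleanup {

    /**
     * Close each resource independently so a failure closing one does
     * not skip the rest. Returns the failures encountered, for logging.
     */
    public static List<Throwable> closeQuietly(AutoCloseable... resources) {
        List<Throwable> failures = new ArrayList<>();
        for (AutoCloseable r : resources) {
            if (r == null) {
                continue;
            }
            try {
                r.close();
            } catch (Throwable t) {
                // Record and keep going: the remaining resources still
                // get a close attempt.
                failures.add(t);
            }
        }
        return failures;
    }
}
```

In the situation above you would call something like `closeQuietly(resultSet, statement)`, and then, because the protocol stream is in an unknown state, abandon the physical connection entirely rather than rolling back and returning it to the pool.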
On Fri, 24 Mar 2006, David Blasby wrote:

> [OutOfMemory errors leave protocol stream in unknown state]

Yeah, that's a problem alright. The easiest thing to do is to treat an out of memory error like an IOException and destroy the whole connection immediately. This isn't terribly friendly though and the vast majority of the errors are going to come from ReceiveTupleV3, so we could put some checks in that path that could get the protocol back into a known state. Adding checks around every allocation isn't going to be worth the effort.

Kris Jurka
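Kris's first option, treating an OutOfMemoryError in the receive path like an IOException, could be sketched as below. Everything here is a simulation with invented names (GuardedStream, TupleReader); the real driver's fix lives inside its own receive code, not in a wrapper like this.

```java
// Hypothetical sketch: condemn the connection when an OutOfMemoryError
// interrupts a tuple read, so no further protocol traffic (rollback,
// close) is attempted on a stream in an unknown state.
public class GuardedStream {

    /** Stand-in for the real tuple-receive call. */
    public interface TupleReader {
        byte[] receiveTuple() throws Exception;
    }

    private boolean broken = false;

    public byte[] receive(TupleReader reader) throws Exception {
        if (broken) {
            throw new IllegalStateException("connection already condemned");
        }
        try {
            return reader.receiveTuple();
        } catch (OutOfMemoryError oom) {
            // The protocol stream may be mid-message: mark the whole
            // connection unusable instead of trying to resynchronize.
            broken = true;
            throw new Exception("stream condemned after OutOfMemoryError", oom);
        }
    }

    public boolean isBroken() {
        return broken;
    }
}
```

A pool that checks `isBroken()` (or receives the equivalent fatal-error event) would then discard the physical connection instead of leaking it, which is exactly the failure in the original report.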
Kris Jurka <books@ejurka.com> writes:
> On Fri, 24 Mar 2006, David Blasby wrote:
>> [OutOfMemory errors leave protocol stream in unknown state]
> Yeah, that's a problem alright. The easiest thing to do is to treat an
> out of memory error like an IOException and destroy the whole connection
> immediately. This isn't terribly friendly though and the vast majority
> of the errors are going to come from ReceiveTupleV3, so we could put some
> checks in that path that could get the protocol back into a known state.
> Adding checks around every allocation isn't going to be worth the effort.

We went through this same evolution with libpq a while back, and pretty much did what you say above: make sure that OOM during tuple collection was handled in a friendly way. OOM at other places is likely to leave things a bit broken. So far there've not been many complaints ...

regards, tom lane
On Fri, 24 Mar 2006, Kris Jurka wrote:
> On Fri, 24 Mar 2006, David Blasby wrote:
>> [OutOfMemory errors leave protocol stream in unknown state]
>
> Yeah, that's a problem alright. The easiest thing to do is to treat an out
> of memory error like an IOException and destroy the whole connection
> immediately. This isn't terribly friendly though and the vast majority of
> the errors are going to come from ReceiveTupleV3, so we could put some checks
> in that path that could get the protocol back into a known state. Adding
> checks around every allocation isn't going to be worth the effort.

I've applied a patch for this to the 8.0, 8.1, and HEAD cvs branches, and new official releases should hopefully be out soon.

Kris Jurka
> I've applied a patch for this to 8.0, 8.1, and HEAD cvs branches and new
> official releases should hopefully be out soon.

Thanks - I will check it out.

dave