Thread: Possible limit on transaction size?

Possible limit on transaction size?

From: bbeyer@purdue.edu
Date: Mon, 8 Sep 2008 09:17:49 -0400
Hello,

I was curious if there was a known size limit for Postgres transactions. In
order to import data into my database, my Java application begins a transaction,
imports the data (into several different tables), and then commits the
transaction on success. It works great on small data sets, but on the large
ones, it doesn't work so well.
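
In outline, the import loop looks roughly like this (a sketch only, assuming plain JDBC; connection details and parameter handling are simplified, and the real inserts happen inside the create_import PL/pgSQL function that shows up in the error below):

// Rough sketch of the import pattern described above, assuming plain JDBC.
// Connection details and parameter values are illustrative, not the real ones.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Importer {
    public static void runImport(Iterable<Object[]> records) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {
            conn.setAutoCommit(false);  // one transaction for the whole import
            try (PreparedStatement stmt = conn.prepareStatement(
                    "select * from create_import(?,?,?,?,?,?) as result")) {
                for (Object[] rec : records) {
                    for (int i = 0; i < 6; i++) {
                        stmt.setObject(i + 1, rec[i]);
                    }
                    stmt.execute();  // the function inserts into several tables
                }
                conn.commit();  // commit only once every record has succeeded
            } catch (SQLException e) {
                conn.rollback();  // any failure rolls back the whole import
                throw e;
            }
        }
    }
}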

About 150 million records into the import process, I get the following error:

ERROR: lock AccessShareLock on object 51533/51769/0 is already held
CONTEXT: SQL statement "INSERT INTO table_name (col1, col2, col3, col4) VALUES (val1, val2, val3, val4)"
    PL/pgSQL function "create_import" line 19 at SQL statement
STATEMENT: select * from create_import($1,$2,$3,$4,$5,$6) as result

I know my server can handle this much data (24GB RAM, 2 TB SAS disks, etc.), but
it doesn't seem like Postgres likes the large transactions.

Any thoughts?

Thank you for your time,

Brian Beyer
Purdue University
bbeyer@purdue.edu

Re: Possible limit on transaction size?

From: Martin Gainty
Date:
Good Morning Brian-

This sounds like a very nasty bug first reported in 2007:
http://archives.postgresql.org/pgsql-bugs/2007-04/msg00075.php

The bug was supposed to be eradicated post-8.3.3.

Which version are you using that exhibits this behaviour?
thanks/
Martin


> Date: Mon, 8 Sep 2008 09:17:49 -0400
> From: bbeyer@purdue.edu
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Possible limit on transaction size?
>
> About 150 million records into the import process, I get the following error:
>
> ERROR: lock AccessShareLock on object 51533/51769/0 is already held



Re: Possible limit on transaction size?

From: Tom Lane
Date:
bbeyer@purdue.edu writes:
> About 150 million records into the import process, I get the following error:

> ERROR: lock AccessShareLock on object 51533/51769/0 is already held

What PG version?  Can you put together a self-contained test case?
(It's probably independent of the data, so you could make a script that
just processes a lot of dummy data the same way you are doing.)
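
For instance, a harness along these lines (the table, function, and row count are all invented here, just mirroring your description) would exercise the same pattern:

// Hypothetical self-contained reproduction: set up a dummy table and a dummy
// PL/pgSQL function, then call the function many times in one transaction.
// Every name and the row count below are invented for illustration.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class Repro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "password")) {
            try (Statement setup = conn.createStatement()) {
                setup.execute("CREATE TABLE dummy (id bigint, payload text)");
                setup.execute(
                    "CREATE OR REPLACE FUNCTION dummy_import(n bigint, p text) "
                    + "RETURNS void AS $$ "
                    + "BEGIN INSERT INTO dummy (id, payload) VALUES (n, p); END; "
                    + "$$ LANGUAGE plpgsql");
            }
            conn.setAutoCommit(false);  // everything below runs in one transaction
            try (PreparedStatement stmt =
                     conn.prepareStatement("SELECT dummy_import(?, ?)")) {
                for (long i = 0; i < 200000000L; i++) {  // well past 150M records
                    stmt.setLong(1, i);
                    stmt.setString(2, "row " + i);
                    stmt.execute();
                }
            }
            conn.commit();
        }
    }
}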

There was a similar report a couple of weeks ago:
http://archives.postgresql.org/pgsql-bugs/2008-08/msg00264.php
http://archives.postgresql.org/pgsql-bugs/2008-09/msg00021.php
but that person hasn't gotten back to us on determining exactly
what happened.

            regards, tom lane