Thread: BUG #6256: loading small file eats all memory and crashes system

BUG #6256: loading small file eats all memory and crashes system

From: "Rachael"

The following bug has been logged online:

Bug reference:      6256
Logged by:          Rachael
Email address:      rulonrasp@gmail.com
PostgreSQL version: 9.1.1
Operating system:   Arch Linux
Description:        loading small file eats all memory and crashes system
Details:

While attempting to import a SQL file, postgres eats all my physical memory
(4GB) and swap space (2GB).

The file itself is a series of insert statements that is 10,000 rows by 100
columns wide.  A smaller file (1000 rows) loads just fine.  I'll be glad to
provide sample files.  The command I'm using is inside a perl script:

print `psql $database $user < postgres.sql`;

When the computer runs out of memory, the process finally dies.  The file
itself is about 32MB in size.

Even if this import needs a performance tweak in the conf file, this
shouldn't exhaust all of the memory in the system before exiting.

Log Output:

LOG:  unexpected EOF on client connection
LOG:  server process (PID 5280) was terminated by signal 9: Killed
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
WARNING:  terminating connection because of crash of another server process

Re: BUG #6256: loading small file eats all memory and crashes system

From: Alex Hunsaker

On Fri, Oct 14, 2011 at 21:43, Rachael <rulonrasp@gmail.com> wrote:
>
> The following bug has been logged online:
>
> Bug reference:      6256
> Logged by:          Rachael
> Email address:      rulonrasp@gmail.com
> PostgreSQL version: 9.1.1
> Operating system:   Arch Linux
> Description:        loading small file eats all memory and crashes system
> Details:
>
> While attempting to import a SQL file, postgres eats all my physical memory
> (4GB) and swap space (2GB).
>
> The file itself is a series of insert statements that is 10,000 rows by 100
> columns wide.  A smaller file (1000 rows) loads just fine.  I'll be glad to
> provide sample files.  The command I'm using is inside a perl script:
>
> print `psql $database $user < postgres.sql`;

What happens when you remove perl from the equation? Or at the very
least, while doing the import, have you tried looking at top or some
other tool to see where the memory is going?

Re: BUG #6256: loading small file eats all memory and crashes system

From: Alex Hunsaker

On Sat, Oct 15, 2011 at 13:40, Rachael Nelson <rulonrasp@gmail.com> wrote:
> When I run this it does the same:
>
> psql test dev < postgres.sql
>
> the db is named test, connecting with the dev user.

I assume you meant to hit reply all :-)

If you can provide a self-contained test case, ideally in the form of
an .sql file, that would help. The table definition, at the very least,
is a must.
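
For example, something shaped like this would do (table and column names
are made up here, and trimmed to three columns for brevity):

CREATE TABLE wide_table (col1 text, col2 text, col3 text);

INSERT INTO wide_table (col1, col2, col3) VALUES
    ('a1', 'a2', 'a3'),
    ('b1', 'b2', 'b3');
-- extended out to 100 columns and 10,000 rows in the real file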

Re: BUG #6256: loading small file eats all memory and crashes system

From: Alex Hunsaker

On Sat, Oct 15, 2011 at 14:16, Rachael Nelson <rulonrasp@gmail.com> wrote:
> Attached is a tiny bit of top output (top.txt), the file being imported
> (postgres.sql.gz - about 32MB), and the perl module script that generates

Thanks. So the problem here seems to be that postgres does not handle a
multi-hundred-megabyte single statement very well. In fact you don't
even need a table; I was able to run it out of memory using a 306MB
single insert with the multirow VALUES syntax.

I would suggest using COPY FROM instead; it should also be much faster.
If you can't do that, then try breaking up that big insert into multiple
smaller ones.
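
For example, with the data reformatted as CSV, from within psql (the
file and table names here are just placeholders):

\copy wide_table FROM 'postgres.csv' WITH CSV

Breaking up the insert would just mean one INSERT ... VALUES statement
per few hundred rows instead of one statement carrying all 10,000.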


For the curious, find below the memory dump. I'm not exactly sure where
all the memory is going; I count ~1.5G below. This box has 8G total
with ~7G free before I ran the query.

TopMemoryContext: 61360 total in 7 blocks; 3920 free (13 chunks); 57440 used
  TopTransactionContext: 8192 total in 1 blocks; 7520 free (0 chunks); 672 used
  MessageContext: 1487668504 total in 87 blocks; 2256 free (1 chunks); 1487666248 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  smgr relation table: 24576 total in 2 blocks; 13920 free (4 chunks); 10656 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
  CacheMemoryContext: 817840 total in 20 blocks; 111704 free (0 chunks); 706136 used
    .....<snip>
  MdSmgr: 8192 total in 1 blocks; 7584 free (0 chunks); 608 used
  LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (3 chunks); 32 used

Re: BUG #6256: loading small file eats all memory and crashes system

From: Rachael Nelson

COPY FROM worked great with CSV formatting.  Thank you very much for your
help!  I recently started with Postgres after working heavily with MySQL for
corporate environments.  The reason for the script was to test the
performance of a SQL command that was taking forever in MySQL:

SELECT id, employer_id, employee_id
FROM giantTable
WHERE id IN (SELECT MAX(id)
             FROM giantTable
             GROUP BY employer_id, employee_id)
  AND to_number(status, '9') <= 3;

This simple query with a subselect took 8 seconds to pull from that table
with 2,000 rows in MySQL.  In Postgres it took 3ms!  The same query with
100,000 rows is still running in MySQL (8 minutes and counting...) while in
Postgres it finished almost instantly (132ms).  Obviously this can be
optimized with a join instead of a subselect, but it shows how pitiful
MySQL is and how nicely Postgres performs.  This is with out-of-the-box
parameters that could probably be tweaked for better performance, but even
then, Postgres is kicking the pants off MySQL.  I'm very impressed - y'all
have a great product!!
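
For the record, the join version I had in mind would look something like
this (an untested sketch; the subquery aliases are invented):

SELECT g.id, g.employer_id, g.employee_id
FROM giantTable g
JOIN (SELECT MAX(id) AS max_id
      FROM giantTable
      GROUP BY employer_id, employee_id) latest
  ON g.id = latest.max_id
WHERE to_number(g.status, '9') <= 3;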
