Thread: pg_dump: NOTICE: ShmemAlloc: out of memory

pg_dump: NOTICE: ShmemAlloc: out of memory

From: Brad Paul
I'm trying to dump a database called stocks. I did:

pg_dump stocks > stocks.sql
pg_dump: NOTICE:  ShmemAlloc: out of memory
pg_dump: Attempt to lock table "slab" failed.
ERROR:  LockAcquire: holder table out of memory

I'm using postgresql-7.2.3-5.73 on Red Hat 7.3.

I'm trying to do some calculations on historical stock information. I have
written a Perl script that fills my database with historical stock
information off the web. This database has 3478 tables, one table for each
stock ticker I'm interested in. Each table has up to 3000 rows.

When I do my calculations I need stock info for many tickers on one date
and then the next date etc.

The fact that pg_dump fails when I try to dump that data concerns me.

Have I set up a very poor database? Would it be better for me to put all
of my data into one table? If the data is all in one table, will I need to
do a select for each date and ticker on a table with 3478*3000 rows?
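
To make the question concrete, here is the sort of single-table layout I
have in mind; the table, column, and index names below are made up just
for illustration:

CREATE TABLE quotes (
    ticker      text,
    trade_date  date,
    close_price numeric
);
CREATE INDEX quotes_by_ticker ON quotes (ticker, trade_date);
CREATE INDEX quotes_by_date   ON quotes (trade_date);

-- all tickers for one date, in a single select:
SELECT ticker, close_price FROM quotes WHERE trade_date = '2002-12-20';

At 3478 tickers times up to 3000 rows each, that table would hold roughly
10 million rows. Would an indexed select like the one above still be
reasonable at that size?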

Re: pg_dump: NOTICE: ShmemAlloc: out of memory

From: Tom Lane
Brad Paul <bradpaul.a.with@long.tail.bdp-optics.com> writes:
> I'm trying to dump a database called stocks. I did:
> pg_dump stocks > stocks.sql
> pg_dump: NOTICE:  ShmemAlloc: out of memory
> pg_dump: Attempt to lock table "slab" failed.
> ERROR:  LockAcquire: holder table out of memory

> This database has 3478 tables.

You probably need to increase max_locks_per_transaction.  Or better,
rethink your schema.  Thousands of tiny tables is usually a bad idea
compared to a smaller number of (properly indexed) tables.
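
For reference: pg_dump locks every table in a single transaction, and the
shared lock table has room for roughly max_locks_per_transaction *
max_connections entries.  With the 7.2 defaults (64 * 32 = 2048) that is
fewer than your 3478 tables, which is why the lock table fills up.  A
minimal sketch of the fix, assuming a stock install (adjust the data
directory path to yours); a postmaster restart is needed because this
setting sizes shared memory at startup:

# postgresql.conf: default is 64; 128 * 32 = 4096 slots > 3478 tables
max_locks_per_transaction = 128

# then restart the postmaster
pg_ctl restart -D /path/to/data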

            regards, tom lane


Re: pg_dump: NOTICE: ShmemAlloc: out of memory

From: Mike Mascari
Tom Lane wrote:
> Brad Paul <bradpaul.a.with@long.tail.bdp-optics.com> writes:
>
>> I'm trying to dump a database called stocks. I did:
>> pg_dump stocks > stocks.sql
>> pg_dump: NOTICE:  ShmemAlloc: out of memory
>> pg_dump: Attempt to lock table "slab" failed.
>> ERROR:  LockAcquire: holder table out of memory
>
>> This database has 3478 tables.
>
> You probably need to increase max_locks_per_transaction.  Or better,
> rethink your schema.  Thousands of tiny tables is usually a bad idea
> compared to a smaller number of (properly indexed) tables.

The number 3478 is coincidentally close to the number of NYSE-listed
stocks (which drops a few here, picks up a few there over time). Although,
as of yesterday, it was 3611. Hmmm... sounds suspicious... :-)

Mike Mascari
mascarm@mascari.com