Re: Maximum number of tables - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Maximum number of tables
Msg-id: 631.1300209340@sss.pgh.pa.us
In response to: Maximum number of tables (Manos Karpathiotakis <mk@di.uoa.gr>)
Responses: Re: Maximum number of tables (Manos Karpathiotakis <mk@di.uoa.gr>)
List: pgsql-general

Manos Karpathiotakis <mk@di.uoa.gr> writes:
> Hi all,
> I am using Postgres 9.0 under CentOS 5.5 (dual Xeon quad-core @ 2.44 GHz,
> 64GB RAM, 2TB RAID 5). In my case, Postgres is used as a backend for the
> RDF store Sesame.
> I am trying to store a dataset that consists of approximately 900,000,000
> insertions (organized in either 10 tables, or in an arbitrary number of
> tables exceeding 2000). Loading the data into 10 tables takes about two
> days, while in the second case, an org.postgresql.util.PSQLException:
> ERROR: out of shared memory error is thrown.

I'm guessing that you're seeing this through some client-side code that
helpfully suppresses the HINT about raising max_locks_per_transaction
:-(

If your application tries to touch all 2000 tables in one transaction,
you will need to raise that parameter to avoid running out of lock-table
space for the AccessShareLocks it needs on all those tables.
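
For reference, the shared lock table has room for about
max_locks_per_transaction * (max_connections + max_prepared_transactions)
locks in total, so the fix is a one-line change in postgresql.conf.  A
sketch, with an illustrative value (the default is 64, and changing it
requires a server restart):

    # postgresql.conf
    # Let each transaction hold locks on more objects at once;
    # needed when a single transaction touches ~2000 tables.
    max_locks_per_transaction = 256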

However ... generally speaking, I'd guess that whatever you did to
refactor 10 tables into 2000 was a bad idea --- one table with an extra
key column is almost always a better design than N basically-identical
tables.  The latter will be a *lot* harder to use, and probably won't
fix your performance problem anyway.  You should attack the performance
problem in a different way.  Have you read
http://www.postgresql.org/docs/9.0/static/populate.html ?
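
To sketch the refactoring (the table and column names here are made up,
not Sesame's actual schema):

    -- Instead of N near-identical tables ...
    CREATE TABLE triples_1 (subj text, pred text, obj text);
    CREATE TABLE triples_2 (subj text, pred text, obj text);

    -- ... use one table with an extra key column:
    CREATE TABLE triples (
        grp  integer NOT NULL,   -- plays the role of the table number
        subj text,
        pred text,
        obj  text
    );
    CREATE INDEX triples_grp_idx ON triples (grp);

As for the loading speed, the short version of that page is: use COPY
rather than row-at-a-time INSERTs, create indexes after the data is in,
and bump maintenance_work_mem while you do it.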

            regards, tom lane
