Thread: Max number of tables in a db?

Max number of tables in a db?

From: bpalmer
The web site lists a lot of the limitations for postgresql,  but not the
max number of tables that are allowed in a given db.  Anyone care to come
up with a number?

- Brandon


----------------------------------------------------------------------------
 b. palmer,  bpalmer@crimelabs.net            pgp:crimelabs.net/bpalmer.pgp5


Re: Max number of tables in a db?

From: Thomas Lockhart
> The web site lists a lot of the limitations for postgresql,  but not the
> max number of tables that are allowed in a given db.  Anyone care to come
> up with a number?

It is limited by your filesystem's ability to handle large directories.
I have noticed in the past (not with PostgreSQL per se) that a directory
holding ~3000 files is noticeably slower to work with than a typical
small directory.

afaik that is the only real limitation, so you can try some timing and
performance tests using, for example, a perl script to create and access
a large number of files.
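
Something like the following would do it (a minimal sketch in Python
rather than Perl; the scratch directory name and the file counts are
arbitrary placeholders):

#!/usr/bin/env python
# Rough timing test: create n files in one directory, then open each
# one again by name.  "dirtest" and the counts are arbitrary choices.
import os
import time

COUNTS = [100, 1000, 10000]

for n in COUNTS:
    d = os.path.join("dirtest", str(n))
    os.makedirs(d, exist_ok=True)

    t0 = time.time()
    for i in range(n):
        open(os.path.join(d, "f%06d" % i), "w").close()
    create_t = time.time() - t0

    # Re-open every file once.  With linear directory scans, each open
    # costs time proportional to n, so this loop degrades worse than
    # linearly as the directory grows.
    t0 = time.time()
    for i in range(n):
        open(os.path.join(d, "f%06d" % i)).close()
    open_t = time.time() - t0

    print("%6d files: create %7.3fs  open %7.3fs" % (n, create_t, open_t))

On a filesystem that scans directories linearly, the total open time
should grow roughly quadratically across those sizes; with hashed or
cached directory lookups it should stay close to linear.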

                      - Thomas

Re: Max number of tables in a db?

From: Tom Lane
Thomas Lockhart <lockhart@fourpalms.org> writes:
>> The web site lists a lot of the limitations for postgresql,  but not the
>> max number of tables that are allowed in a given db.  Anyone care to come
>> up with a number?

> It is limited by your filesystem's ability to handle large directories.
> I have noticed in the past (not with PostgreSQL per se) that a directory
> holding ~3000 files is noticeably slower to work with than a typical
> small directory.

A couple of further notes --- there are Unix filesystems that don't suck
with large directories, but I'm not sure whether any of the ones in
common use have smart directory handling.  The typical approach is that
file opens, creates, deletes require a linear scan of the directory.

Depending on your Postgres usage, this may or may not pose any
noticeable performance problem for you, even with thousands of tables in
the DB.  A backend that's only accessing a small-to-moderate number of
tables will open them only once per session, so in a lot of scenarios
the fact that the OS is a tad slow to open the files won't add up to
anything much.

But still, you'd probably have performance problems once you got to the
range of hundreds of thousands of tables in a database.
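
(If you want to see where that point lies on your own setup, here is a
rough probe.  It assumes the psycopg2 driver and a scratch database
named "scratch" you can fill with junk tables; both are assumptions for
illustration, not anything implied above.

import time
import psycopg2

N_TABLES = 5000  # arbitrary; raise it to find where your setup degrades

conn = psycopg2.connect(dbname="scratch")
cur = conn.cursor()

t0 = time.time()
for i in range(N_TABLES):
    cur.execute("CREATE TABLE junk_%06d (id integer)" % i)
conn.commit()
print("created %d tables in %.1fs" % (N_TABLES, time.time() - t0))
cur.close()
conn.close()

# Reconnect so the new backend has to open the table's file afresh;
# the first access per session is where the per-file open cost shows up.
conn = psycopg2.connect(dbname="scratch")
cur = conn.cursor()
t0 = time.time()
cur.execute("SELECT count(*) FROM junk_000000")
cur.fetchone()
print("first query took %.4fs" % (time.time() - t0))
conn.close()

The reconnect matters: the session that created the tables already has
them open, so only a fresh session pays the cold-open cost.)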

Before 7.1, each "large object" counted as a separate table, so it
wasn't too hard to hit this range even with a simple DB design.  We don't
do that anymore, though.

            regards, tom lane

Re: Max number of tables in a db?

From: "Colin 't Hart"
> A couple of further notes --- there are Unix filesystems that don't suck
> with large directories, but I'm not sure whether any of the ones in
> common use have smart directory handling.  The typical approach is that
> file opens, creates, deletes require a linear scan of the directory.

Linux kernels 2.2.x and later (including 2.4.x) have a directory entry
cache (hashed, I think, though it could be a btree), which means that
subsequent file opens are very fast and very large directories are not a
problem, provided the cache entries haven't aged enough to be discarded.
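
A quick way to watch the cache at work: open every file in a large
directory twice and compare the two passes.  (This reuses the
dirtest/10000 directory from the earlier sketch, which is purely an
assumption; point it anywhere with lots of files.)  If the cache starts
cold, e.g. right after a mount, the second pass should be much faster.

import os
import time

d = os.path.join("dirtest", "10000")
names = [os.path.join(d, f) for f in os.listdir(d)]

for label in ("first pass", "second pass"):
    t0 = time.time()
    for name in names:
        open(name).close()
    print("%s: %.3fs" % (label, time.time() - t0))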


How does Postgres do its file handling? How many files can it have open
simultaneously?


Cheers,

Colin