Re: Re: Speed of locating tables - Mailing list pgsql-general

From Ron Peterson
Subject Re: Re: Speed of locating tables
Date
Msg-id 39366A04.44D9136@yellowbank.com
In response to Re: Speed of locating tables  ("carl garland" <carlhgarland@hotmail.com>)
List pgsql-general
Jurgen Defurne wrote:
>
> carl garland wrote:
>
> > > Don't even think about 100000 separate tables in a database :-(.  It's
> > > not so much that PG's own datastructures wouldn't cope, as that very
> > > few Unix filesystems can cope with 100000 files in a directory.  You'd
> > > be killed on directory search times.

> > I understand the concern about directory search times, but what if the
> > partition for the db files is under XFS or some other journaling fs that
> > allows very quick searches on large directories?  I also saw that there
> > may be concern over PG's own datastructures, in that the master tables
> > holding the table and index entries require a sequential scan to locate
> > a table.  Why support a large # of tables in PG if exceeding a certain
> > limit causes severe performance problems?  What if your data model
> > requires more than 1,000,000 tables?
> >
>
> If the implementation is like the above, there is much less concern about
> directory search times, although a directory might get fragmented and be
> spread out across the disk (with 1,000,000+ tables it will be fragmented).

That's only true if the filesystem uses block allocation.  If the
filesystem uses extent-based allocation, fragmentation wouldn't be a
concern.

(I'm no expert on filesystems.  Moshe Bar just happened to write an
article on filesystems in this month's Byte - www.byte.com).

> ... With the directory search above
> deleted, you still have to search your inode table.

Which could be enormous.  Yuck.

Are there clever ways of managing huge numbers of inodes?
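If anyone wants to see the directory-search cost for themselves, here's a
rough benchmark sketch (mine, not from the thread; Python, scratch
directory, and file counts are all arbitrary choices).  On a filesystem
with linear directory scans, per-lookup time should grow with the number
of entries; note that the kernel's dentry cache can hide the effect on
repeated lookups, so treat the numbers as indicative only:

```python
import os
import shutil
import tempfile
import time

def time_lookup(n_files, repeats=100):
    """Create n_files empty files in one scratch directory and time a
    stat() on the last one, which forces a directory-entry lookup.
    Returns the average seconds per stat() call."""
    d = tempfile.mkdtemp()
    try:
        for i in range(n_files):
            open(os.path.join(d, "t%06d" % i), "w").close()
        target = os.path.join(d, "t%06d" % (n_files - 1))
        start = time.perf_counter()
        for _ in range(repeats):  # repeat to get a measurable interval
            os.stat(target)       # repeated lookups may hit the dentry cache
        return (time.perf_counter() - start) / repeats
    finally:
        shutil.rmtree(d)  # clean up the scratch directory

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print("%6d files: %.3g s per lookup" % (n, time_lookup(n)))
```

Whether the times actually climb depends on the filesystem's directory
implementation (and on caching), which is exactly the point under
discussion.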

-Ron-
