Speed of locating tables? - Mailing list pgsql-general

From Steve Wampler
Subject Speed of locating tables?
Msg-id 392E87B6.32710D1F@noao.edu
List pgsql-general
Given the name of a table, how quickly can PostgreSQL
locate and access that table?  How does the performance
compare (ballpark estimate) with using flat files to
represent the data in each table?

I have a problem where the (to me) most natural solution
is to create a large number of small tables.  A new solar
telescope/instrument set we're building needs to share
configuration information (sets of attribute name-value
pairs) across a distributed environment, plus retain these
sets for possible reuse.  Typically, 10-30 thousand of
these sets will be created each day, and each set has a
unique id string associated with it.  When an attribute
set is needed, it is needed quickly - roughly every 1/5
of a second a request will be made of the system that
requires access to one of the sets.  The request will
always be by the id string, never by any more complex
scheme.

To me, the most natural way to encode the sets is to
create a separate table for each set, since the attributes
can then be indexed and referenced quickly once the table
is accessed.  But I don't know how fast PG is at locating
a table, given its name.
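
To make that layout concrete, here is a minimal sketch of the
per-set encoding described above (the table name, column names,
and sample attributes are invented for illustration - in practice
each table's name would be derived from its set's unique id string):

```sql
-- One small table per attribute set; the table's name encodes
-- the set's unique id (hypothetical example).
CREATE TABLE set_19990526_0001 (
    attr_name   text PRIMARY KEY,  -- indexed, so lookup by name is fast
    attr_value  text
);

INSERT INTO set_19990526_0001 VALUES ('filter', 'H-alpha');
INSERT INTO set_19990526_0001 VALUES ('exposure_ms', '200');

-- Fetching one attribute once the table has been located:
SELECT attr_value FROM set_19990526_0001 WHERE attr_name = 'filter';
```

With ~100,000 such tables, the open question is the cost of
resolving the table name itself before this per-table index
can be used.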

So, to refine the question - given a DB with (say) 100,000
tables, how quickly can PG access a table given its name?

Thanks!  I'm also open to suggestions on other ways to
represent the data that would provide better access
performance - you can probably tell I'm new to the world of
databases.

--
Steve Wampler - SOLIS Project, National Solar Observatory
swampler@noao.edu
