Re: max_files_per_process limit - Mailing list pgsql-admin

From Scott Marlowe
Subject Re: max_files_per_process limit
Date
Msg-id dcc563d10811111059p7dc52b73w225815315020eda7@mail.gmail.com
In response to Re: max_files_per_process limit  ("Dilek Küçük" <dilekkucuk@gmail.com>)
List pgsql-admin
On Tue, Nov 11, 2008 at 5:10 AM, Dilek Küçük <dilekkucuk@gmail.com> wrote:
>
> On Mon, Nov 10, 2008 at 4:51 PM, Achilleas Mantzios
> <achill@matrix.gatewaynet.com> wrote:
>>
>> On Monday 10 November 2008 16:18:37, Dilek Küçük wrote:
>> > Hi,
>> >
>> > We have a database of about 62000 tables (about 2000 tablespaces) with
>> > an index on each table. PostgreSQL version is 8.1.
>> >
>>
>> So you have about 62000 distinct schemata in your db?
>> Imagine that the average enterprise has about 200 tables max,
>> and an average-sized country has about 300 such companies,
>> including the public sector; with 62000 tables you could blindly model
>> .... the whole activity of a whole country.
>>
>> Is this some kind of replicated data?
>> What's the story?
>
> Actually we had 31 distinct tables, but this amounted to tens of billions
> of records (streaming data from 2000 sites) per table per year, so we
> horizontally partitioned each table into 2000 tables. This allowed us to
> drop one of the indexes we had created and freed us from the periodic
> CLUSTER operations, which had turned out to be infeasible for a system
> with tight query-time constraints.
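
For reference, that kind of per-site partitioning on 8.1 is usually done
with table inheritance plus constraint_exclusion; a rough sketch, with
made-up table and column names (not theirs):

-- Hypothetical names throughout; one child table per site.
CREATE TABLE readings (
    site_id     integer NOT NULL,
    recorded_at timestamp with time zone NOT NULL,
    value       double precision
);

CREATE TABLE readings_site_0001 (
    CHECK (site_id = 1)
) INHERITS (readings);

CREATE INDEX readings_site_0001_time_idx
    ON readings_site_0001 (recorded_at);

-- With constraint_exclusion = on in postgresql.conf (new in 8.1),
-- queries against the parent that filter on site_id skip the
-- non-matching children.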

Any chance of combining the less-used tables back together to reduce their
number?  I'd also look at using more schemas and fewer tablespaces.  Just a
thought.
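
To illustrate the schema idea (made-up names again): a schema per site
gives you the same per-site namespacing without needing a tablespace, and
therefore a separate directory, for every site:

-- Hypothetical names; everything stays in the default tablespace.
CREATE SCHEMA site_0001;
CREATE TABLE site_0001.readings (
    recorded_at timestamp with time zone NOT NULL,
    value       double precision
);
-- No per-site CREATE TABLESPACE needed just to keep sites separate.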
