Re: Large number of tables slow insert - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: Large number of tables slow insert
Date
Msg-id dcc563d10808260829s6d397b7egd364589c9c5b16b1@mail.gmail.com
In response to Re: Large number of tables slow insert  (Matthew Wakeling <matthew@flymine.org>)
List pgsql-performance
On Tue, Aug 26, 2008 at 6:50 AM, Matthew Wakeling <matthew@flymine.org> wrote:
> On Sat, 23 Aug 2008, Loic Petit wrote:
>>
>> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large
>> number of sensors. In order to get good performance when querying by
>> timestamp on each sensor, I partitioned my measures table per sensor.
>> Thus I created a lot of tables.
>
> As far as I can see, you are having performance problems as a direct result
> of this design decision, so it may be wise to reconsider. If you have an
> index on both the sensor identifier and the timestamp, it should perform
> reasonably well. It would scale a lot better with thousands of sensors too.

Properly partitioned, I'd expect one big table to outperform 3,000
sparsely populated tables.
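
For illustration, Matthew's suggested single-table design with an index on both the sensor identifier and the timestamp might look like the following sketch; the table and column names (measures, sensor_id, ts, value) are made up here, not taken from the thread:

```sql
-- One table for all sensors, instead of one table per sensor.
CREATE TABLE measures (
    sensor_id integer          NOT NULL,
    ts        timestamptz      NOT NULL,
    value     double precision
);

-- A single composite index serves per-sensor timestamp-range
-- queries for every sensor.
CREATE INDEX measures_sensor_ts_idx ON measures (sensor_id, ts);

-- Example of a query this index supports:
-- SELECT * FROM measures
--  WHERE sensor_id = 42
--    AND ts >= now() - interval '1 day';
```

Because the index leads with sensor_id, rows for one sensor are contiguous in the index and a timestamp range scan within that sensor stays cheap, which is the scaling behavior the reply refers to.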
