Re: Decreasing BLKSZ - Mailing list pgsql-performance

From Bucky Jordan
Subject Re: Decreasing BLKSZ
Date
Msg-id 78ED28FACE63744386D68D8A9D1CF5D4209ADC@MAIL.corp.lumeta.com
Whole thread Raw
In response to Re: Decreasing BLKSZ  ("Marc Morin" <marc@sandvine.com>)
Responses Re: Decreasing BLKSZ  ("Marc Morin" <marc@sandvine.com>)
List pgsql-performance
> > The bottom line here is likely to be "you need more RAM" :-(
>
> Yup.  Just trying to get a handle on what I can do if I need more than
> 16G of RAM... That's as much as I can put on the installed base of
> servers.... 100s of them.
>
> >
> > I wonder whether there is a way to use table partitioning to
> > make the insert pattern more localized?  We'd need to know a
> > lot more about your insertion patterns to guess how, though.
> >
> >             regards, tom lane
>
> We're doing partitioning as well.....
> >
I'm guessing that you basically have a data collection application that
sends in lots of records, and a reporting application that wants
summaries of the data? So, if I understand the problem correctly, you
don't have enough RAM (or may not in the future) to index the data as it
comes in.

Not sure how much you can change the design, but what about either
updating summary table(s) as the records come in (via a trigger, as part
of the transaction, or in the application), or indexing periodically? In
other words, load a partition (say, a day's worth), then index that
partition all at once. That might not work so well if you're doing
real-time analysis, but the summary tables should.
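A rough sketch of the trigger idea, with hypothetical table and column
names (raw_log, daily_summary, ts/key/value) since I don't know your
actual schema -- the point is just that reports hit the small summary
table, so the raw table needs no index at all:

```sql
-- Hypothetical schema: a raw log table plus a trigger-maintained
-- summary, so the reporting side never touches the raw data.
CREATE TABLE raw_log (
    ts    timestamp NOT NULL,
    key   integer   NOT NULL,
    value bigint    NOT NULL
);

CREATE TABLE daily_summary (
    day   date    NOT NULL,
    key   integer NOT NULL,
    total bigint  NOT NULL,
    PRIMARY KEY (day, key)
);

-- Roll each new row into the summary inside the same transaction
-- as the INSERT.
CREATE OR REPLACE FUNCTION update_summary() RETURNS trigger AS $$
BEGIN
    UPDATE daily_summary
       SET total = total + NEW.value
     WHERE day = NEW.ts::date AND key = NEW.key;
    IF NOT FOUND THEN
        INSERT INTO daily_summary (day, key, total)
        VALUES (NEW.ts::date, NEW.key, NEW.value);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER raw_log_summary
    AFTER INSERT ON raw_log
    FOR EACH ROW EXECUTE PROCEDURE update_summary();
```

The trade-off is that the trigger fires per row, so it slows the insert
path in exchange for keeping the raw table index-free; doing the rollup
in the application, batched, would be cheaper per record.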

I assume the application generates unique records on its own due to the
timestamp, so this isn't really about checking for constraint
violations? If so, you can probably do away with the index on the tables
that you're running the inserts on.
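If you do drop the index on the insert tables, the "index periodically"
variant looks something like this -- again with made-up partition names,
just to show the load-then-index pattern:

```sql
-- Hypothetical daily partition: bulk-load it with no indexes
-- in place, then build the index once after the load finishes.
INSERT INTO log_2006_09_26 SELECT * FROM staging;

-- One index build over the whole day is far cheaper than
-- maintaining the index row-by-row during the inserts.
CREATE INDEX log_2006_09_26_ts_idx ON log_2006_09_26 (ts, key);
ANALYZE log_2006_09_26;
```

Since your timestamps already guarantee uniqueness, a plain index (not a
unique one) is all the reporting queries would need, and only on
partitions that are done receiving inserts.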

- Bucky
