Re: Configuration tips for very large database - Mailing list pgsql-performance

From Claudio Freire
Subject Re: Configuration tips for very large database
Msg-id CAGTBQpb4_sk1hebZh3pekL=QxgMJCFOC-nxE_LW5_usSWrBjPQ@mail.gmail.com
In response to Re: Configuration tips for very large database  (Kevin Grittner <kgrittn@ymail.com>)
Responses Re: Configuration tips for very large database  (Nico Sabbi <nicola.sabbi@poste.it>)
List pgsql-performance
On Thu, Feb 12, 2015 at 7:38 PM, Kevin Grittner <kgrittn@ymail.com> wrote:
> Nico Sabbi <nicola.sabbi@poste.it> wrote:
>
>> Queries get executed very very slowly, say 20 minutes.
>
>> I'd like to know if someone has already succeeded in running
>> postgres with 200-300M records with queries running much faster
>> than this.
>
> If you go to the http://wcca.wicourts.gov/ web site, bring up any
> case, and click the "Court Record Events" button, it will search a
> table with hundreds of millions of rows.  The table is not
> partitioned, but has several indexes on it which are useful for
> queries such as the one that is used when you click the button.

I have a table with ~800M wide rows, and reporting queries against it
run quite efficiently (usually in seconds).

Of course, the queries don't traverse the whole table; that wouldn't
be efficient. That's probably the key: don't make your database
process the whole table on every query if you expect it to scale.

What kind of queries are you running that have slowed down?

Post an EXPLAIN ANALYZE so people can diagnose it. Possibly it's a
query/indexing issue rather than a hardware one.
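For example (the query itself is hypothetical; substitute your own slow statement), prefix the statement with EXPLAIN ANALYZE and post the complete output:

```sql
-- Hypothetical slow query; replace with the real one.
-- BUFFERS additionally shows how much data the query touched.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.name, count(*)
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= now() - interval '30 days'
GROUP BY c.name;
```

In the output, a Seq Scan over a huge table, or an estimated row count wildly different from the actual one, usually points at a missing index or stale statistics rather than at the hardware.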

