Re: Performance question 83 GB Table 150 million rows, distinct select - Mailing list pgsql-performance

From Tory M Blue
Subject Re: Performance question 83 GB Table 150 million rows, distinct select
Date
Msg-id CAEaSS0azUnqfJewznuaOtKkMyFpeJSkn7O7-i4+BmS6XSnDGcg@mail.gmail.com
In response to Re: Performance question 83 GB Table 150 million rows, distinct select  (Josh Berkus <josh@agliodbs.com>)
List pgsql-performance
On Wed, Nov 16, 2011 at 9:19 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Tory,
>
> A seq scan across 83GB in 4 minutes is pretty good.   That's over
> 300MB/s.  Even if you assume that 1/3 of the table was already cached,
> that's still over 240mb/s.  Good disk array.
>
> Either you need an index, or you need to not do this query at user
> request time.  Or a LOT more RAM.

Thanks Josh,

That's also the other scenario: what should we expect? Maybe the 4
minutes, which turns into 5.5 hours or 23 hours for a report, is simply
standard given our data and sizing.

If so, then it's about stopping the chase and starting to look at tuning,
or a redesign where possible, so that reports finish in a timely fashion.
The data is still going to grow a bit, and reporting requirements are on
the rise.
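
For example, one redesign along the lines Josh suggested would be to stop
running the distinct select at report time and read from a rolled-up
summary instead. The table and column names below are made up for
illustration, since the real schema isn't shown here:

    -- Hypothetical rollup, refreshed on a schedule (e.g. nightly), so the
    -- report never scans the 83 GB base table at request time:
    CREATE TABLE report_uid_rollup AS
        SELECT log_date, count(DISTINCT uid) AS distinct_uids
        FROM   big_log_table
        GROUP  BY log_date;

    -- The report query then becomes a cheap lookup:
    SELECT distinct_uids
    FROM   report_uid_rollup
    WHERE  log_date = DATE '2011-11-16';

The trade-off is that the report is only as fresh as the last refresh,
which may be acceptable for this kind of reporting.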

You folks are the right place to seek answers from; I just need to make
sure I'm giving you the information that will allow you to help me.

Memory is not expensive these days, so it's possible that I bump the
server to 192 GB or whatever to give me the headroom, but we are trying
to dig a bit deeper into the data, queries, and tuning before I go the
hardware route again.
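
If we do go the RAM route, these are the knobs I'd expect to revisit
afterwards. The comments are rough rules of thumb only, not
recommendations for this particular box:

    -- Current values, for reference:
    SHOW shared_buffers;        -- typically a few GB, well under total RAM
    SHOW effective_cache_size;  -- planner hint, often set to ~2/3-3/4 of RAM
                                -- so it knows how much OS cache to expect
    SHOW work_mem;              -- per-sort/per-hash allocation, so sized with
                                -- the number of concurrent backends in mind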

Tory
