Re: Optimizations - Mailing list pgsql-general

From: Ogden
Subject: Re: Optimizations
Date:
Msg-id: 26852C9E-8489-4FDD-B9AB-FA5642BBC4AD@darkstatic.com
In response to: Re: Optimizations (Craig Ringer <craig@postnewspapers.com.au>)
Responses: Re: Optimizations (Craig Ringer <craig@postnewspapers.com.au>)
List: pgsql-general
On Mar 5, 2010, at 2:26 AM, Craig Ringer wrote:

> Ogden wrote:
>> We run a student scoring system with PostgreSQL as a backend. After the results for each student are entered into
>> the system, we display many reports for them. We haven't had a problem with efficiency or speed, but it has come up
>> that perhaps storing the rolled-up scores of each student may be better than calculating their scores on the fly. I
>> have always coded the SQL to calculate on the fly and have not seen any drawback to doing so. For a test with over
>> 100 questions and with 950 students having taken it, it calculates all their relevant score information in less
>> than half a second. Would there be any obvious benefit to caching the results?
>
> Caching the results would mean storing the same information in two
> places (individual scores, and aggregates calculated from them). That's
> room for error if they're permitted to get out of sync in any way for
> any reason. For that reason, and because it's complexity you don't need,
> I'd avoid it unless I had a good reason to.
>
> On the other hand if you expect the number of students you have to
> report on to grow vastly then it's worth considering.
>
> If you do go ahead with it, first restructure all queries that use that
> information so they go via a view that calculates that data on the fly.
>
> Then look at replacing that view with a table that's automatically
> updated by triggers when the data source is updated (say, a student has
> a new score recorded).

Craig,

Thank you for the response and insight.

While it sounds good in theory, I know that storing the results will vastly increase the size (the table holding the
results is over 5 GB in one case), and calculating results from it takes no more than a second even for a huge data set.

Would looking up pre-computed results in a huge table actually be faster than calculating them, or about the same? I'll
have to run some tests on my end, but I am very impressed by the speed at which PostgreSQL executes aggregate functions.
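
For concreteness, the on-the-fly calculation is essentially an aggregate along these lines (the table and column names
here are made up for illustration, not our real schema):

    -- hypothetical schema: scores(student_id, test_id, question_id, points)
    SELECT student_id,
           sum(points) AS total_points,
           count(*)    AS questions_scored
    FROM   scores
    WHERE  test_id = 42
    GROUP  BY student_id;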

Do you suggest looking at this option only when we see the reporting start to slow down? At that point, do you suggest
we go back to the drawing board?
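
If we ever did go that route, is something along these lines roughly what you have in mind? (Just a sketch against the
same made-up scores table as above; the trigger only handles inserts, so score corrections or deletions would need the
same treatment.)

    -- step 1: point the reports at a view that rolls up on the fly
    CREATE VIEW student_test_totals AS
    SELECT student_id, test_id, sum(points) AS total_points
    FROM   scores
    GROUP  BY student_id, test_id;

    -- step 2 (only if reporting slows down): swap the view for a real
    -- table kept up to date by a trigger on the scores table
    DROP VIEW student_test_totals;

    CREATE TABLE student_test_totals (
        student_id    integer NOT NULL,
        test_id       integer NOT NULL,
        total_points  numeric NOT NULL DEFAULT 0,
        PRIMARY KEY (student_id, test_id)
    );

    CREATE FUNCTION bump_student_test_total() RETURNS trigger AS $$
    BEGIN
        -- add the new score to the existing rollup row, if there is one
        UPDATE student_test_totals
           SET total_points = total_points + NEW.points
         WHERE student_id = NEW.student_id
           AND test_id    = NEW.test_id;
        -- otherwise start a new rollup row for this student/test
        IF NOT FOUND THEN
            INSERT INTO student_test_totals (student_id, test_id, total_points)
            VALUES (NEW.student_id, NEW.test_id, NEW.points);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER scores_rollup_trg
        AFTER INSERT ON scores
        FOR EACH ROW EXECUTE PROCEDURE bump_student_test_total();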

Thank you

Ogden
