Yes, I had also thought of this method and tested it before I got your mail, and the solution seems workable.
Thanks for the help.
On Feb 12, 2008 9:18 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Analyzing did not help; here is the output of EXPLAIN ANALYZE for the update query:
> "Seq Scan on dummy  (cost=0.00..56739774.24 rows=23441 width=275) (actual time=18.927..577929.014 rows=22712 loops=1)"
> "  SubPlan"
> "    ->  Aggregate  (cost=2420.41..2420.43 rows=1 width=19) (actual time=25.423..25.425 rows=1 loops=22712)"
> "          ->  Seq Scan on dummy "temp"  (cost=0.00..2416.01 rows=586 width=19) (actual time=0.049..17.834 rows=2414 loops=22712)"
> "                Filter: ((product)::text = ($0)::text)"
> "Total runtime: 578968.885 ms"
Yeah, that's just not going to be fast. An index on the product column
might help a bit, but the real issue is that you're repetitively
calculating the same aggregates. I think you need a separate temp
table, along the lines of
create temp table dummy_agg as
select product,
(case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end) as s
from dummy
group by product;
create index dummy_agg_i on dummy_agg(product); -- optional
update dummy
set gp = (select s from dummy_agg where dummy_agg.product = dummy.product);
The index would only be needed if you expect a lot of rows (a lot of
different product values).
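(An equivalent way to write that last step, if you would rather join than use a subselect; this is the standard PostgreSQL UPDATE ... FROM form, not something from the original mail:

update dummy
set gp = dummy_agg.s
from dummy_agg
where dummy_agg.product = dummy.product;

One behavioral difference to be aware of: the subselect form sets gp to NULL for any product missing from dummy_agg, while the FROM form leaves such rows untouched. Since dummy_agg is built by grouping over all of dummy, every product is present here and the two are equivalent.)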
regards, tom lane