On Mon, 2002-09-09 at 07:13, Bruce Momjian wrote:
>
> OK, turns out that the loop for sequential scan ran fewer times and was
> skewing the numbers. I have a new version at:
>
> ftp://candle.pha.pa.us/pub/postgresql/randcost
Latest version:
olly@linda$ ~/randcost
random test: 14
sequential test: 11
null timing test: 9
random_page_cost = 2.500000
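
Incidentally, the figures printed are consistent with the cost being
computed as (random - null) / (sequential - null): for the run above,
(14 - 9) / (11 - 9) = 2.5.  A quick shell check of that arithmetic (my
reconstruction of the calculation, not the actual randcost code):

    # verify the ratio using the first run's figures (variable names mine)
    random=14; sequential=11; null=9
    awk -v r=$random -v s=$sequential -v n=$null \
        'BEGIN { printf "random_page_cost = %f\n", (r - n) / (s - n) }'
    # prints: random_page_cost = 2.500000

That would also explain the complaint at the end of the loop below: when
the sequential and null times are equal, the divisor is zero.
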
olly@linda$ for a in 1 2 3 4 5
> do
> ~/randcost
> done
Collecting sizing information ...
random test: 11
sequential test: 11
null timing test: 9
random_page_cost = 1.000000
random test: 11
sequential test: 10
null timing test: 9
random_page_cost = 2.000000
random test: 11
sequential test: 11
null timing test: 9
random_page_cost = 1.000000
random test: 11
sequential test: 10
null timing test: 9
random_page_cost = 2.000000
random test: 10
sequential test: 10
null timing test: 10
Sequential time equals null time. Increase TESTCYCLES and rerun.
Available memory (512MB) exceeds the total database size, so the sequential
and random timings are almost identical for the second and subsequent runs:
by then every page is coming out of the kernel's buffer cache rather than
off the disk.
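
To get genuinely cold-cache figures I would have to evict the table files
from the cache between runs; the only way I know of doing that on this box
is to read through more scratch data than there is RAM, along these lines
(the 600MB figure and the /tmp path are just my guesses at something safely
bigger than 512MB):

    # crude cache flush between runs: write and re-read more data than RAM,
    # so the kernel evicts the previously cached table pages
    dd if=/dev/zero of=/tmp/flushfile bs=1024k count=600
    dd if=/tmp/flushfile of=/dev/null bs=1024k
    rm -f /tmp/flushfile
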
Since, in production, I would hope to have all active tables permanently
in RAM, would there be a case for setting random_page_cost to 1, on the
assumption that no disk reads would be needed?
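
If the answer is yes, I assume it is just a matter of putting

    random_page_cost = 1

in postgresql.conf, or issuing SET random_page_cost = 1; per session while
experimenting.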
--
Oliver Elphick Oliver.Elphick@lfix.co.uk
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C
========================================
"Draw near to God and he will draw near to you. Cleanse your hands,
you sinners; and purify your hearts, you double minded."  James 4:8