Lincoln Yeoh <lyeoh@pop.jaring.my> writes:
> If my O/S has a cache of say 1GB and my DB is < 1GB and is totally in cache
> would setting effective_cache_size to 1GB make the optimizer decide on
> index usage just as setting random_page_cost to 1?
I don't feel like going through the equations at the moment (it's open
source, read it for yourself), but certainly if table+index are smaller
than effective_cache_size the cost estimate should be pretty low.
> If random page cost is high but so is effective_cache_size does postgresql
> use sequential scans first time round and then index scans second time
> round if everything cached?
No, there is no notion of "first time round" vs "second time round".
> But the main thing is: is it hard for the optimizer to tell whether a
> DB/table/index is completely in effective_cache_size?
It knows the table and index sizes as last recorded by VACUUM. These
might not match up with current reality, of course...
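For what it's worth, those recorded sizes are visible in the system catalogs and can be refreshed by hand. A minimal sketch, with `mytable` as a placeholder name:

```sql
-- relpages/reltuples are the planner's stored size estimates,
-- updated when VACUUM (or ANALYZE) last ran on the relation.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'mytable';

-- Refresh the recorded sizes and statistics:
VACUUM ANALYZE mytable;
```

If the table has grown or shrunk a lot since the last VACUUM, the stored numbers (and hence the cost estimates) will be off until you re-run it.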
regards, tom lane