On Wed, 15 Oct 2003, Mike Leahy wrote:
> I increased those variables you suggested, and that seems to have increased
> the memory allocated to the process in Windows.
>
> Also, I tried the same query I was using, but with some actual values
> specified in the WHERE clause - that got it to use the indexes. The only
> thing is, I would normally be joining such a statement to another table, in
> which case there wouldn't be a WHERE clause. I don't think it uses the
> indexes in that case, even if the number of rows being used is a fraction
> of what's in the table.
You'll need to try it and post the EXPLAIN ANALYZE results, because joins
give the planner its own chances to use indexes, so I wouldn't want to
speculate without more info.
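For example, something along these lines (the joined table and its column
names are invented here, purely to show the shape of the test):

  EXPLAIN ANALYZE
  SELECT p.*
  FROM tbl_censo_poblacion_1993 p
  JOIN tbl_distritos d ON p.cod_distrito = d.cod_distrito
  WHERE d.departamento = 'Lima';

The plan output will show whether the join condition is driving an index
scan on the big table or whether the planner is still choosing a seqscan.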
One other thing to try is to issue SET enable_seqscan = off; before running
the EXPLAIN ANALYZE and compare the results to a run where you haven't done
the SET. That setting puts a large cost penalty on choosing a seqscan. For
the original query it's entirely possible that the resulting plan really
will be more expensive. If it isn't (or it's really close), you may also
want to look into lowering random_page_cost in the postgresql.conf file.
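A session for the comparison might look like this (the SELECT stands in for
whatever query you're actually testing):

  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT ... ;  -- the same query as before
  RESET enable_seqscan;         -- back to the default planner behavior

And in postgresql.conf (4 is the default; the right value depends on your
disks, so treat 2 as nothing more than an illustration):

  # Lower this if index scans keep beating seqscans in practice.
  random_page_cost = 2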
> Regarding the vacuum results, here they are:
>
> INFO: --Relation public.tbl_censo_poblacion_1993--
> INFO: Pages 283669: Changed 0, Empty 0; Tup 2553015: Vac 0, Keep 0, UnUsed 150106.
>       Total CPU 5.89s/2.90u sec elapsed 56.52 sec.
> VACUUM
>
> What do you make of these results?
Well, that'd imply that the records are about 1k apiece on average, or that
you have dead space in the table. I'm still thinking that 189s to read 284k
pages is a bit much (about 1.5x the time for an equivalent number of pages
on my unoptimized dev box), but I don't know how good your hardware is, and
you're running under cygwin, which probably doesn't help.
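To spell out the arithmetic behind the 1k estimate (assuming the default 8k
PostgreSQL page size):

  283669 pages * 8192 bytes/page  ~= 2.2 GB on disk
  2553015 tuples / 283669 pages   ~= 9 tuples/page
  8192 bytes/page / 9 tuples/page ~= 910 bytes/tuple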