Pedro Alves <pmalves@think.pt> writes:
> 1. I run the same query (select blah ... order by foo limit bar) on 2
> "virtually" identical machines, both running postgres v7.3.2. The database
> is the same (the amount of data differs a bit) and machine A has (much)
> more shared buffers than machine B; postgres uses indexes on B but not on
> A. If I change the limit from 200 to 100, machine A starts using indexes.
> On machine B, the optimizer only stops using indexes at limit 800. Why does
> this happen? Is there any memory parameter that controls this behaviour?
Have you run VACUUM ANALYZE recently on both machines?
shared_buffers doesn't affect the estimated cost of an indexscan.
effective_cache_size and random_page_cost do, but you didn't mention
having touched those.
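
Something along these lines would be a reasonable experiment (the values
below are purely illustrative, not recommendations, and note that in 7.3
effective_cache_size is measured in 8K disk pages, not bytes):

	-- refresh the planner's statistics on both machines
	VACUUM ANALYZE;

	-- see what the planner is currently assuming
	SHOW effective_cache_size;
	SHOW random_page_cost;

	-- per-session experiments; move to postgresql.conf once satisfied
	SET effective_cache_size = 10000;	-- ~80MB of kernel disk cache
	SET random_page_cost = 2;		-- default is 4

Comparing EXPLAIN output for the query before and after should show
whether the indexscan's estimated cost moves relative to the seqscan's.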
> 3. I have a composite index on columns foo and bar and an index on foo. I
> noticed that for a query such as select * from table where foo=1 and
> bar=2, postgres correctly uses foo_bar_idx. But if I use select * from
> table where foo=1 and bar IN (1,2), postgres uses foo_idx, which is much
> less efficient.
Presently, you'd need an index on (bar,foo) to get a good plan for a
query expressed that way.
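
A minimal sketch, assuming your table is actually named t ("table" in
your message is a reserved word, so I'm using a stand-in):

	-- put bar first so the IN list can drive the scan
	CREATE INDEX bar_foo_idx ON t (bar, foo);
	ANALYZE t;

	EXPLAIN SELECT * FROM t WHERE foo = 1 AND bar IN (1, 2);

The idea is that the IN list expands to an OR, and each arm (bar = 1,
bar = 2) then matches the leading index column, leaving foo = 1 usable
as a qual on the second column.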
regards, tom lane