Thanks Craig. Yes, I already tried it, but it didn't work. I don't see any solution other than this bug being fixed. Take a look at http://www.postgresql.org/search/?m=1&q=LIMIT&l=8&d=365&s=r. There are quite a few bug reports about LIMIT slowing queries down. Let's hope it gets fixed someday :)
Date: Fri, 16 Nov 2012 08:32:24 -0800
Subject: Re: [PERFORM] PostgreSQL strange query plan for my query
From: cjames@emolecules.com
To: dato0011@hotmail.com
CC: pgsql-performance@postgresql.org
On Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili <dato0011@hotmail.com> wrote:
I have a database with a few hundred million rows. I'm running the following query:
SELECT * FROM "Payments" AS p
INNER JOIN "PaymentOrders" AS po
    ON po."Id" = p."PaymentOrderId"
INNER JOIN "Users" AS u
    ON u."Id" = po."UserId"
INNER JOIN "Roles" AS r
    ON u."RoleId" = r."Id"
WHERE r."Name" = 'Moses'
LIMIT 1000
When the WHERE clause finds a match in the database, I get the result in a few milliseconds, but if I modify the query and specify a non-existent r."Name" in the WHERE clause, it takes far too long to complete. I guess PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.
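For what it's worth, a plain EXPLAIN (which only plans the query, without executing it) is a cheap way to confirm that guess; here 'NoSuchRole' just stands in for the non-existent name:

EXPLAIN
SELECT *
FROM "Payments" AS p
INNER JOIN "PaymentOrders" AS po ON po."Id" = p."PaymentOrderId"
INNER JOIN "Users" AS u ON u."Id" = po."UserId"
INNER JOIN "Roles" AS r ON u."RoleId" = r."Id"
WHERE r."Name" = 'NoSuchRole'
LIMIT 1000;

If the output shows a Seq Scan node on "Payments", that would confirm the suspicion.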
Isn't PostgreSQL smart enough to check first whether the Roles table contains any row with Name 'Moses'?
The Roles table contains only 15 rows, while Payments contains ~350 million.
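One way to see what row counts the planner is actually working from is the pg_class catalog; reltuples is only an estimate, refreshed by ANALYZE or autovacuum:

SELECT relname, reltuples
FROM pg_class
WHERE relname IN ('Roles', 'Payments');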
You probably checked this already, but just in case you didn't ... did you run ANALYZE on the small table? I've been hit by this before ... it's natural to think that Postgres would always check a very small table first no matter what the statistics say, but it's not true. If you analyze the small table, even if it only has one or two rows in it, it will often radically change the plan that Postgres chooses.
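Concretely, something like this, using the table names from your query, is all it takes:

ANALYZE "Roles";      -- refresh planner statistics for the 15-row table
ANALYZE "Payments";   -- the big table too, in case autovacuum hasn't caught up

After that, re-run EXPLAIN on the failing query and see whether the plan changes.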