Thread: Different query plans on same servers


From: Mario Splivalo
I have a fairly simple query:

SELECT <some columns>
FROM "tubesite_image"
INNER JOIN "tubesite_object"
    ON ("tubesite_image"."object_ptr_id" = "tubesite_object"."id")
WHERE
    "tubesite_object"."site_id" = 8
ORDER BY
    "tubesite_object"."pub_date" ASC LIMIT 21;



That query gets a bad query plan on the production server:

 Limit  (cost=0.00..1938.67 rows=21 width=275) (actual time=3270.000..3270.000 rows=0 loops=1)
   ->  Nested Loop  (cost=0.00..792824.51 rows=8588 width=275) (actual time=3269.997..3269.997 rows=0 loops=1)
         ->  Index Scan using tubesite_object_pub_date_idx on tubesite_object  (cost=0.00..789495.13 rows=9711 width=271) (actual time=0.011..3243.629 rows=9905 loops=1)
               Filter: (site_id = 8)
         ->  Index Scan using tubesite_image_pkey on tubesite_image  (cost=0.00..0.33 rows=1 width=4) (actual time=0.002..0.002 rows=0 loops=9905)
               Index Cond: (tubesite_image.object_ptr_id = tubesite_object.id)
 Total runtime: 3270.071 ms

But when I turn off nested loops, the query flies:


                                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=31384.35..31384.40 rows=21 width=275) (actual time=37.988..37.988 rows=0 loops=1)
   ->  Sort  (cost=31384.35..31405.82 rows=8588 width=275) (actual time=37.986..37.986 rows=0 loops=1)
         Sort Key: tubesite_object.pub_date
         Sort Method:  quicksort  Memory: 25kB
         ->  Hash Join  (cost=857.00..31152.80 rows=8588 width=275) (actual time=37.968..37.968 rows=0 loops=1)
               Hash Cond: (tubesite_object.id = tubesite_image.object_ptr_id)
               ->  Bitmap Heap Scan on tubesite_object  (cost=596.77..30685.30 rows=9711 width=271) (actual time=7.414..25.132 rows=9905 loops=1)
                     Recheck Cond: (site_id = 8)
                     ->  Bitmap Index Scan on tubesite_object_site_id  (cost=0.00..594.34 rows=9711 width=0) (actual time=4.943..4.943 rows=9905 loops=1)
                           Index Cond: (site_id = 8)
               ->  Hash  (cost=152.88..152.88 rows=8588 width=4) (actual time=4.620..4.620 rows=8588 loops=1)
                     ->  Seq Scan on tubesite_image  (cost=0.00..152.88 rows=8588 width=4) (actual time=0.005..2.082 rows=8588 loops=1)
 Total runtime: 38.071 ms
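(For reference, the faster plan above can be reproduced in a psql session by disabling nested loops for that session only; a sketch, with `<some columns>` left as the placeholder it is in the original query:)

```sql
-- Disable nested-loop joins for this session only; this is a diagnostic
-- tool for comparing plans, not something to set globally.
SET enable_nestloop = off;

EXPLAIN ANALYZE
SELECT <some columns>
FROM "tubesite_image"
INNER JOIN "tubesite_object"
    ON ("tubesite_image"."object_ptr_id" = "tubesite_object"."id")
WHERE "tubesite_object"."site_id" = 8
ORDER BY "tubesite_object"."pub_date" ASC LIMIT 21;

-- Restore the default afterwards.
RESET enable_nestloop;
```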


I have rsynced the database from the prod server to the test server, which has the same configuration (shared_buffers, work_mem, effective_cache_size, and so on), and there it chooses the bitmap heap scan with a hash join without nested loops having to be disabled.

I have 8.4.8 on production and 8.4.9 on test; could that explain the
difference in the plans chosen?


Re: Different query plans on same servers

From: Tom Lane
Mario Splivalo <mario.splivalo@megafon.hr> writes:
> I have 8.4.8 on production and 8.4.9 on test; could that explain the
> difference in the plans chosen?

I'd wonder first if you have the same statistics settings on both.
The big problem here is that the estimation of the join size is bad
(8588 versus 0).
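(A quick way to check this on both servers; a sketch using the standard system catalogs, with the table names taken from the query above:)

```sql
-- Global statistics target (compare on both servers):
SHOW default_statistics_target;

-- Per-column overrides, if any (-1 means "use the default"):
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'tubesite_object'::regclass
  AND attnum > 0 AND NOT attisdropped;

-- When the statistics were last refreshed:
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('tubesite_object', 'tubesite_image');
```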

            regards, tom lane

Re: Different query plans on same servers

From: Kevin Grittner
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> I'd wonder first if you have the same statistics settings on both.
> The big problem here is that the estimation of the join size is
> bad (8588 versus 0).

But both servers develop that estimate for the join size.  I was
wondering more about whether the costing factors were really the
same:

slow:

   ->  Nested Loop
          (cost=0.00..792824.51 rows=8588 width=275)
          (actual time=3269.997..3269.997 rows=0 loops=1)

versus fast:

         ->  Hash Join
                (cost=857.00..31152.80 rows=8588 width=275)
                (actual time=37.968..37.968 rows=0 loops=1)

The hash join path must look more expensive on the first machine,
for some reason.

Mario, could you post the result of running this query from both
servers?:

http://wiki.postgresql.org/wiki/Server_Configuration
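(The query on that wiki page is, roughly, the following; it lists every setting changed from its default, so overlooked differences stand out:)

```sql
-- Show all non-default settings on this server.
SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY 1;
```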

-Kevin

Re: Different query plans on same servers

From: Kevin Grittner
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:

> But both servers develop that estimate for the join size.

[sigh]  Those *were* both from the production server.  Please show
us the EXPLAIN ANALYZE from the other server.

-Kevin

Re: Different query plans on same servers

From: Mario Splivalo
On 12/06/2011 09:00 PM, Tom Lane wrote:
> Mario Splivalo <mario.splivalo@megafon.hr> writes:
>> I have 8.4.8 on production and 8.4.9 on test; could that explain the
>> difference in the plans chosen?
>
> I'd wonder first if you have the same statistics settings on both.
> The big problem here is that the estimation of the join size is bad
> (8588 versus 0).

They do, I guess. I rsynced the postgres data directory from the prod
server to the test server; the only difference is that the prod server
was a bit more loaded than the test server.

    Mario

Re: Different query plans on same servers

From: Mario Splivalo
On 12/06/2011 09:17 PM, Kevin Grittner wrote:
>
> The hash join path must look more expensive on the first machine,
> for some reason.
>
> Mario, could you post the result of running this query from both
> servers?:
>
> http://wiki.postgresql.org/wiki/Server_Configuration

Sure. Here is the output from the prod server:

            name             |                                            current_setting
-----------------------------+--------------------------------------------------------------------------------------------------------
 version                     | PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian 4.3.2-1.1) 4.3.2, 64-bit
 checkpoint_segments         | 64
 default_statistics_target   | 2000
 effective_cache_size        | 36GB
 external_pid_file           | /var/run/postgresql/8.4-main.pid
 lc_collate                  | en_US.UTF-8
 lc_ctype                    | en_US.UTF-8
 listen_addresses            | *
 log_autovacuum_min_duration | 0
 log_checkpoints             | on
 log_line_prefix             | %t [%p]: [%l-1] [%d]
 log_min_duration_statement  | 1s
 maintenance_work_mem        | 256MB
 max_connections             | 1500
 max_stack_depth             | 3MB
 port                        | 5432
 server_encoding             | UTF8
 shared_buffers              | 4GB
 statement_timeout           | 30min
 temp_buffers                | 4096
 TimeZone                    | localtime
 track_activity_query_size   | 2048
 unix_socket_directory       | /var/run/postgresql
 wal_buffers                 | 128MB
 work_mem                    | 64MB


And here is the output from the test server:

            name            |                                           current_setting
----------------------------+------------------------------------------------------------------------------------------------------
 version                    | PostgreSQL 8.4.9 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
 checkpoint_segments        | 64
 default_statistics_target  | 2000
 effective_cache_size       | 36GB
 external_pid_file          | /var/run/postgresql/8.4-main.pid
 lc_collate                 | en_US.UTF-8
 lc_ctype                   | en_US.UTF-8
 listen_addresses           | *
 log_connections            | on
 log_disconnections         | on
 log_line_prefix            | %t [%p]: [%l-1] [%d]
 log_min_duration_statement | 0
 maintenance_work_mem       | 256MB
 max_connections            | 40
 max_stack_depth            | 3MB
 port                       | 5432
 server_encoding            | UTF8
 shared_buffers             | 4GB
 ssl                        | on
 temp_buffers               | 4096
 TimeZone                   | localtime
 unix_socket_directory      | /var/run/postgresql
 wal_buffers                | 128MB
 work_mem                   | 64MB
(24 rows)

At the time I ran EXPLAIN ANALYZE on the prod server there were about
80 connections open.
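(If load is a suspect, the connection count at the time can be checked with something like the following; on 8.4, idle backends report `<IDLE>` in `current_query`:)

```sql
-- Total backends, and how many are actively running a query (8.4-era view).
SELECT count(*) AS total,
       sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active
FROM pg_stat_activity;
```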

    Mario

Re: Different query plans on same servers

From: Mario Splivalo
On 12/06/2011 09:29 PM, Kevin Grittner wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:
>
>> But both servers develop that estimate for the join size.
>
> [sigh]  Those *were* both from the production server.  Please show
> us the EXPLAIN ANALYZE from the other server.

Huh, right... missed that one. Here is the EXPLAIN ANALYZE from the
other server:



                                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=31531.75..31531.80 rows=21 width=275) (actual time=45.584..45.584 rows=0 loops=1)
   ->  Sort  (cost=31531.75..31531.84 rows=36 width=275) (actual time=45.579..45.579 rows=0 loops=1)
         Sort Key: tubesite_object.pub_date
         Sort Method:  quicksort  Memory: 25kB
         ->  Hash Join  (cost=866.34..31530.82 rows=36 width=275) (actual time=45.544..45.544 rows=0 loops=1)
               Hash Cond: (tubesite_object.id = tubesite_image.object_ptr_id)
               ->  Bitmap Heap Scan on tubesite_object  (cost=606.11..31146.68 rows=9884 width=271) (actual time=6.861..37.497 rows=9905 loops=1)
                     Recheck Cond: (site_id = 8)
                     ->  Bitmap Index Scan on tubesite_object_site_id  (cost=0.00..603.64 rows=9884 width=0) (actual time=4.792..4.792 rows=9905 loops=1)
                           Index Cond: (site_id = 8)
               ->  Hash  (cost=152.88..152.88 rows=8588 width=4) (actual time=3.816..3.816 rows=8588 loops=1)
                     ->  Seq Scan on tubesite_image  (cost=0.00..152.88 rows=8588 width=4) (actual time=0.003..1.740 rows=8588 loops=1)
 Total runtime: 45.798 ms




This is the same query on the prod server, but without the LIMIT:


----------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=31713.95..31735.42 rows=8588 width=275) (actual time=60.311..60.311 rows=0 loops=1)
   Sort Key: tubesite_object.pub_date
   Sort Method:  quicksort  Memory: 25kB
   ->  Hash Join  (cost=857.00..31152.80 rows=8588 width=275) (actual time=60.255..60.255 rows=0 loops=1)
         Hash Cond: (tubesite_object.id = tubesite_image.object_ptr_id)
         ->  Bitmap Heap Scan on tubesite_object  (cost=596.77..30685.30 rows=9711 width=271) (actual time=8.682..49.721 rows=9905 loops=1)
               Recheck Cond: (site_id = 8)
               ->  Bitmap Index Scan on tubesite_object_site_id  (cost=0.00..594.34 rows=9711 width=0) (actual time=5.705..5.705 rows=9905 loops=1)
                     Index Cond: (site_id = 8)
         ->  Hash  (cost=152.88..152.88 rows=8588 width=4) (actual time=4.281..4.281 rows=8588 loops=1)
               ->  Seq Scan on tubesite_image  (cost=0.00..152.88 rows=8588 width=4) (actual time=0.005..1.437 rows=8588 loops=1)
 Total runtime: 60.483 ms
(12 rows)


I will try rsyncing the prod database to the test server running 8.4.8
tomorrow and see what happens. Hopefully upgrading to 8.4.9 (or even
8.4.10, if the Debian packages are out by tomorrow) will solve the issue...

    Mario

Re: Different query plans on same servers

From: Mario Splivalo
On 12/06/2011 09:00 PM, Tom Lane wrote:
> Mario Splivalo <mario.splivalo@megafon.hr> writes:
>> I have 8.4.8 on production and 8.4.9 on test; could that explain the
>> difference in the plans chosen?
>
> I'd wonder first if you have the same statistics settings on both.
> The big problem here is that the estimation of the join size is bad
> (8588 versus 0).

Just an update here. I downgraded postgres on the test box to 8.4.8,
and now it chooses the bad plan there too.

So we upgraded postgres on the production server and the bad plan went
away. We're preparing for an upgrade to 9.1 now; we hope to offload some
of the SELECTs to a slave server, and we'll see how that works.

Thank you for your inputs!

    Mario