Re: select max(field) from table much faster with a group by clause? - Mailing list pgsql-performance

From Palle Girgensohn
Subject Re: select max(field) from table much faster with a group by clause?
Msg-id DFCABF1E9EE0F7A9D44CA50A@rambutan.pingpong.net
In response to Re: select max(field) from table much faster with a group by clause?  (Tom Lane <tgl@sss.pgh.pa.us>)

--On Thursday, November 01, 2007 09.43.39 -0400 Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Palle Girgensohn <girgen@pingpong.net> writes:
>> When running
>>  select max("when") from login where userid='userid'
>> it takes a second or two, but when adding "group by userid" the planner
>> decides on using another plan, and it gets *much* faster. See example
>> below.
>
> It's only faster for cases where there are few or no rows for the
> particular userid ...
>
>> Number of tuples per user varies from zero to a couple of thousands.
>
> The planner is using an intermediate estimate of 406 rows.  You might be
> well advised to increase the statistics target for login.userid --- with
> luck that would help it to choose the right plan type for both common
> and uncommon userids.

Unfortunately, altering the statistics target doesn't help. I see no
difference when changing the value from 10 (the default) to 100, 1000 or
100000. :-(
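
For reference, the change was made along these lines (a sketch; the target
value varied between attempts), re-analyzing the table after each change:

ALTER TABLE login ALTER COLUMN userid SET STATISTICS 1000;  -- also tried 100 and larger values
ANALYZE login;  -- refresh statistics so the planner sees the new target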

Are there any other things I can modify?
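
In case the plans themselves help, this is how I compare the two variants
('someuser' is just a placeholder userid):

-- plain max(): the variant that is slow for this userid
EXPLAIN ANALYZE SELECT max("when") FROM login WHERE userid = 'someuser';

-- with GROUP BY added: the variant that runs much faster
EXPLAIN ANALYZE SELECT max("when") FROM login WHERE userid = 'someuser' GROUP BY userid;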

Oh, btw, maybe something in postgresql.conf is misconfigured?

max_connections = 100
shared_buffers = 30000                  # min 16 or max_connections*2, 8KB each
temp_buffers = 2500                     # min 100, 8KB each
max_prepared_transactions = 100         # can be 0 or more
work_mem = 16384                        # min 64, size in KB
maintenance_work_mem = 16384            # min 1024, size in KB
max_stack_depth = 32768                 # min 100, size in KB
max_fsm_pages = 500000
max_fsm_relations = 20000
max_files_per_process = 2000
fsync = off
checkpoint_segments = 50                # in logfile segments, min 1, 16MB each
effective_cache_size = 10000            # typically 8KB each
random_page_cost = 1.8
geqo = on
geqo_threshold = 10
from_collapse_limit = 8
join_collapse_limit = 8                 # 1 disables collapsing of explicit JOINs
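
FWIW, I can also flip planner costs per session to see whether the plan
choice is cost-driven; something like this (just a sketch, not a setting
I'd leave in place, and 'someuser' is again a placeholder):

-- hypothetical per-session experiment: lower the random-page cost and
-- see whether the plain max() query now picks the faster plan
SET random_page_cost = 1.0;
EXPLAIN ANALYZE SELECT max("when") FROM login WHERE userid = 'someuser';
RESET random_page_cost;  -- back to the configured 1.8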

