Re: Help me in reducing the CPU cost for the high cost query below,as it is hitting production seriously!! - Mailing list pgsql-performance

From Justin Pryzby
Subject Re: Help me in reducing the CPU cost for the high cost query below,as it is hitting production seriously!!
Date 2018-05-23
Msg-id 20180523134322.GK30060@telsasoft.com
In response to Re: Help me in reducing the CPU cost for the high cost query below,as it is hitting production seriously!!  (pavan95 <pavan.postgresdba@gmail.com>)
Responses Re: Help me in reducing the CPU cost for the high cost query below,as it is hitting production seriously!!
List pgsql-performance
On Wed, May 23, 2018 at 12:01:06AM -0700, pavan95 wrote:
> As said, created index on the res_users.res_employee_id and the below link
> is the explain plan result.
> 
> Link:  https://explain.depesz.com/s/hoct
> 
> And the cost of Previous query is 92,129 and the cost of current modified
> query after creating the above said index is 91,462. But good thing is we

Forget the cost - that's postgres's *model* of the combined IO+CPU.
If the model is off, that may cause bad plans and could be looked into
further.

In any case, that index cut your runtime from 75sec to 60sec (in spite of the
modelled cost).

It looks like you resolved the bad estimate on the users table?

> 2. tms_workflow_history:
> Indexes:
>     "tms_workflow_history_pkey" PRIMARY KEY, btree (id)
>     "curract_state_isfinal_app_idx" btree (current_activity, state, is_final_approver)
>     "timesheet_id_group_id_active_idx" btree (timesheet_id, group_id, active)
>     "tms_wkf_his_active_is_final_approveridx" btree (active, is_final_approver)
>     "tms_wkf_his_group_id_idx" btree (group_id)
>     "tms_wkf_his_timesheet_id_idx" btree (timesheet_id)
>     "tms_wkf_hist_current_activity_idx" btree (current_activity)
>     "tms_wkf_hist_state_idx" btree (state)
>     "wsr_id_idx" btree (wsr_id)

How big is the table ?  And curract_state_isfinal_app_idx ?
Have these been reindexed (or pg_repacked) recently?
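To check, something along these lines would show the on-disk sizes (names taken
from the schema you posted):

    SELECT pg_size_pretty(pg_table_size('tms_workflow_history'));             -- heap + TOAST, excluding indexes
    SELECT pg_size_pretty(pg_relation_size('curract_state_isfinal_app_idx')); -- just that index

If the index is much larger than the row count would suggest, bloat from churn is
a likely culprit, and REINDEX (or pg_repack, which avoids the long lock) would help.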

It seems to me that the remaining query optimization is to improve this:
> Bitmap Heap Scan on tms_workflow_history history (cost=193.19..1,090.50 rows=6,041 width=12) (actual time=3.692..15.714 rows=11,351 loops=1)

I think you could consider clustering (or repacking) the table on
curract_state_isfinal_app_idx (but you'll have to judge if that's okay and
won't negatively affect other queries).
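As a sketch only (CLUSTER takes an ACCESS EXCLUSIVE lock for its duration, so it
needs a maintenance window, and the ordering isn't maintained for rows written
afterwards):

    CLUSTER tms_workflow_history USING curract_state_isfinal_app_idx;
    ANALYZE tms_workflow_history;

pg_repack can do roughly the same thing online if the lock is a problem.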

But, what's your target runtime ?  Improvements here could cut at most 15sec
off the total 60sec.  If you're hoping to save more than that, you'll need to
(also) look further than the query:

 - postgres parameters: what are shared_buffers, work_mem, effective_cache_size ?
   + https://wiki.postgresql.org/wiki/Server_Configuration (a query to list non-default settings is below)
 - are there other DBs/applications running on the server/VM ?
 - kernel tuning (readahead, VM parameters, write cache, scheduler, THP, etc)
 - server hardware (what OS? storage? RAM? filesystem?)
 - how does the storage perform outside of postgres?
   + something like this: /usr/sbin/bonnie++ -f -n0 -x4 -d /var/lib/pgsql
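
For the first point, a query like the one on that wiki page shows everything
that's been changed from the defaults:

    SELECT name, current_setting(name), source
      FROM pg_settings
     WHERE source NOT IN ('default', 'override');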

Justin

