Thread: Many DataFileRead - IO waits

Many DataFileRead - IO waits

From
Ben Snaidero
Date:
Running into a strange issue that just popped up on a few servers in my environment, and I was wondering if the community had any insight into what could be causing it.

First, a bit of background. I am running Postgres 10.11 on Windows (but have seen a similar issue on a server running 11.6).
Windows Version:
Major  Minor  Build  Revision
-----  -----  -----  --------
10     0      14393  0  

I have the following query, which was averaging ~2 ms, suddenly jump to an average of ~25 ms.  This query is called millions of times per day, and there were cases of it taking 20-30 seconds.  Below is the explain analyze of one such example.
While seeing this issue the server was under some CPU pressure, but even with that I would not expect it to get as slow as shown below: we are using SSDs, and none of the Windows disk counters (IOPS, queue length) show any value that would be of concern.

explain (analyze,buffers) SELECT tabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver
FROM tabledata WHERE timestamp_date <= $1 AND int_otherid3 IN ($2,$3,$4,$5,$6,$7) AND tabledata.int_id=$8 ORDER BY timestamp_date DESC LIMIT 1

QUERY PLAN                                                                      
 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.71..139.67 rows=1 width=84) (actual time=17719.076..17719.077 rows=1 loops=1)
       Buffers: shared hit=12102 read=13259 written=111
       ->  Index Scan Backward using ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata  (cost=0.71..2112828.54 rows=15204 width=84) (actual time=17719.056..17719.057 rows=1 loops=1)
             Index Cond: ((int_id = 8149) AND (timestamp_date <= '2020-02-24 03:05:00.013'::timestamp without time zone))
             Filter: (int_otherid3 = ANY ('{3ad2b707-a068-42e8-b0f2-6c8570953760,4e1b1bfa-34e1-48df-8cf8-2b59caf076e2,00d394dd-c2f4-4f3a-a8d4-dc208dafa686,baa904a6-8302-4fa3-b8ae-8adce8fe4306,3c99d61b-21a1-42ea-92a8-3cc88d79f3f1,befe0f8b-5911-47b3-bfae-faa9f8b09d08}'::uuid[]))
             Rows Removed by Filter: 91686
             Buffers: shared hit=12102 read=13259 written=111
 Planning time: 203.153 ms
 Execution time: 17719.200 ms
(9 rows)
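(As an aside on how those averages are gathered: pg_stat_statements is loaded on these servers, as the pgss_ExecutorRun frames in the call stacks below show, so something along these lines is one way to pull the per-call timings; the LIKE filter is only illustrative.)

SELECT calls,
       round(mean_time::numeric, 2) AS mean_ms,   -- pg_stat_statements reports timings in milliseconds
       round(max_time::numeric, 2)  AS max_ms,
       rows
FROM pg_stat_statements
WHERE query LIKE '%FROM tabledata WHERE timestamp_date%'
ORDER BY calls DESC;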

If I look at pg_stat_activity while the query is running, all of the calls to this query have the same wait event:
wait_event - DataFileRead
wait_event_type - IO
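
For completeness, a sketch of the kind of pg_stat_activity check this is based on (the text filter is just illustrative):

SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS running_for,
       query
FROM pg_stat_activity
WHERE state = 'active'
  AND query LIKE '%FROM tabledata WHERE timestamp_date%';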

We took a perfview trace during the issue, and below is the call stack from a process running this query; two call paths are shown.
---------------------------------------------------------------

Name
 ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire
+ ntoskrnl!MiChargeWsles
|+ ntoskrnl!MiObtainSystemCacheView
||+ ntoskrnl!MmMapViewInSystemCache
|| + ntoskrnl!CcGetVacbMiss
||  + ntoskrnl!CcGetVirtualAddress
||   + ntoskrnl!CcMapAndCopyFromCache
||    + ntoskrnl!CcCopyReadEx
||     + ntfs!NtfsCopyReadA
||     |+ fltmgr!FltpPerformFastIoCall
||     | + fltmgr!FltpPassThroughFastIo
||     |  + fltmgr!FltpFastIoRead
||     |   + ntoskrnl!NtReadFile
||     |    + ntdll!NtReadFile
||     |     + kernelbase!ReadFile
||     |      + msvcr120!_read_nolock
||     |       + msvcr120!_read
||     |        + postgres!PathNameOpenFile
||     |         + postgres!??mdclose
||     |          + postgres!ScheduleBufferTagForWriteback
||     |           + postgres!InitBufTable
||     |            + postgres!??PrefetchBuffer
||     |            |+ postgres!index_getnext_tid
||     |            | + postgres!index_fetch_heap
||     |            |  + postgres!ExecIndexEvalRuntimeKeys
||     |            |   + postgres!ExecAssignScanProjectionInfoWithVarno
||     |            |    + postgres!tupledesc_match
||     |            |     + postgres!recompute_limits
||     |            |      + postgres!CheckValidRowMarkRel
||     |            |       + postgres!list_length
||     |            |        + pg_stat_statements!pgss_ExecutorRun
||     |            |         + postgres!PortalRunFetch
||     |            |          + postgres!PortalStart
||     |            |           + postgres!exec_bind_message
||     |            |            + postgres!PostgresMain
||     |            |             + postgres!BackendInitialize
||     |            |              + postgres!ClosePostmasterPorts
||     |            |               + postgres!main
||     |            |                + postgres!_onexit
||     |            |                 + kernel32!BaseThreadInitThunk
||     |            |                  + ntdll!RtlUserThreadStart


Name
 ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire
+ ntoskrnl!MiChargeWsles
|+ ntoskrnl!MiReleaseSystemCacheView
| + ntoskrnl!MmUnmapViewInSystemCache
|  + ntoskrnl!CcUnmapVacb
|   + ntoskrnl!CcUnmapVacbArray
|    + ntoskrnl!CcGetVirtualAddress
|     + ntoskrnl!CcMapAndCopyFromCache
|      + ntoskrnl!CcCopyReadEx
|       + ntfs!NtfsCopyReadA
|       |+ fltmgr!FltpPerformFastIoCall
|       | + fltmgr!FltpPassThroughFastIo
|       |  + fltmgr!FltpFastIoRead
|       |   + ntoskrnl!NtReadFile
|       |    + ntdll!NtReadFile
|       |    |+ kernelbase!ReadFile
|       |    | + msvcr120!_read_nolock
|       |    |  + msvcr120!_read
|       |    |   + postgres!PathNameOpenFile
|       |    |    + postgres!??mdclose
|       |    |     + postgres!ScheduleBufferTagForWriteback
|       |    |      + postgres!InitBufTable
|       |    |       + postgres!??PrefetchBuffer
|       |    |       |+ postgres!index_getnext_tid
|       |    |       | + postgres!index_fetch_heap
|       |    |       |  + postgres!ExecIndexEvalRuntimeKeys
|       |    |       |   + postgres!ExecAssignScanProjectionInfoWithVarno
|       |    |       |    + postgres!tupledesc_match
|       |    |       |     + postgres!recompute_limits
|       |    |       |      + postgres!CheckValidRowMarkRel
|       |    |       |       + postgres!list_length
|       |    |       |        + pg_stat_statements!pgss_ExecutorRun
|       |    |       |         + postgres!PortalRunFetch
|       |    |       |          + postgres!PortalStart
|       |    |       |           + postgres!exec_bind_message
|       |    |       |            + postgres!PostgresMain
|       |    |       |             + postgres!BackendInitialize
|       |    |       |              + postgres!ClosePostmasterPorts
|       |    |       |               + postgres!main
|       |    |       |                + postgres!_onexit
|       |    |       |                 + kernel32!BaseThreadInitThunk
|       |    |       |                  + ntdll!RtlUserThreadStart



If I do a top-down view (i.e. from when the process started, where did we spend the most time?) I get:
Name
 ROOT
+ Process64 postgres (16668) Args:  "--forkbackend" "43216"
 + Thread (16672) CPU=9399ms
 |+ ntdll!RtlUserThreadStart
 ||+ kernel32!BaseThreadInitThunk
 || + postgres!_onexit
 ||  + postgres!main
 ||   + postgres!ClosePostmasterPorts
 ||    + postgres!BackendInitialize
 ||     + postgres!PostgresMain
 ||      + postgres!exec_bind_message
 ||       + postgres!PortalStart
 ||        + postgres!PortalRunFetch
 ||         + pg_stat_statements!pgss_ExecutorRun
 ||          + postgres!list_length
 ||           + postgres!CheckValidRowMarkRel
 ||            + postgres!recompute_limits
 ||             + postgres!tupledesc_match
 ||              + postgres!ExecAssignScanProjectionInfoWithVarno
 ||              |+ postgres!ExecIndexEvalRuntimeKeys
 ||              ||+ postgres!index_fetch_heap
 ||              || + postgres!index_getnext_tid
 ||              || |+ postgres!??PrefetchBuffer
 ||              || ||+ postgres!InitBufTable
 ||              || |||+ postgres!ScheduleBufferTagForWriteback
 ||              || ||||+ postgres!??mdclose
 ||              || |||||+ postgres!PathNameOpenFile
 ||              || ||||||+ msvcr120!_read
 ||              || |||||| + msvcr120!_read_nolock
 ||              || |||||| |+ kernelbase!ReadFile
 ||              || |||||| ||+ ntdll!NtReadFile
 ||              || |||||| || + ntoskrnl!NtReadFile
 ||              || |||||| || |+ fltmgr!FltpFastIoRead
 ||              || |||||| || ||+ fltmgr!FltpPassThroughFastIo
 ||              || |||||| || |||+ fltmgr!FltpPerformFastIoCall
 ||              || |||||| || ||||+ ntfs!NtfsCopyReadA
 ||              || |||||| || |||| + ntoskrnl!CcCopyReadEx
 ||              || |||||| || |||| |+ ntoskrnl!CcMapAndCopyFromCache
 ||              || |||||| || |||| | + ntoskrnl!CcGetVirtualAddress
 ||              || |||||| || |||| | |+ ntoskrnl!CcUnmapVacbArray
 ||              || |||||| || |||| | ||+ ntoskrnl!CcUnmapVacb
 ||              || |||||| || |||| | |||+ ntoskrnl!MmUnmapViewInSystemCache
 ||              || |||||| || |||| | ||| + ntoskrnl!ExAcquireSpinLockExclusive
 ||              || |||||| || |||| | ||| |+ ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire
 
 
Also, from this same perfview, the following looks to be the checkpoint process waiting on the same lock:
 
 Name
 ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire
+ ntoskrnl!ExAcquireSpinLockExclusive
 + ntoskrnl!MiAcquireProperVm
  + ntoskrnl!MiTrimSharedPageFromViews
   + ntoskrnl!MiTrimSection
    + ntoskrnl!MmTrimSection
     + ntoskrnl!CcCoherencyFlushAndPurgeCache
      + ntfs!NtfsFlushUserStream
       + ntfs!NtfsPerformOptimisticFlush
       |+ ntfs!NtfsCommonFlushBuffers
       | + ntfs!NtfsCommonFlushBuffersCallout
       |  + ntoskrnl!KeExpandKernelStackAndCalloutInternal
       |   + ntfs!NtfsCommonFlushBuffersOnNewStack
       |    + ntfs!NtfsFsdFlushBuffers
       |     + fltmgr!FltpLegacyProcessingAfterPreCallbacksCompleted
       |      + fltmgr!FltpDispatch
       |       + ntoskrnl!IopSynchronousServiceTail
       |        + ntoskrnl!NtFlushBuffersFileEx
       |         + ntoskrnl!NtFlushBuffersFile
       |          + ntdll!NtFlushBuffersFile
       |          |+ kernelbase!FlushFileBuffers
       |          | + msvcr120!_commit
       |          |  + postgres!FileClose
       |          |   + postgres!mdtruncate
       |          |    + postgres!??ReleaseBuffer
       |          |     + postgres!CreateCheckPoint
       |          |      + postgres!CheckpointerMain
       |          |       + postgres!AuxiliaryProcessMain
       |          |        + postgres!MaxLivePostmasterChildren
       |          |         + postgres!main
       |          |          + postgres!_onexit
       |          |           + kernel32!BaseThreadInitThunk
       |          |            + ntdll!RtlUserThreadStart


In order to get by, we increased shared_buffers from 500MB to 50GB on this server (and to 10GB on another server), but in my opinion this is just masking the issue.  I was wondering if anyone in the community has seen contention on this lock before, or has any other insights as to why we would suddenly run into this issue?
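
For reference, the change itself is just the one setting; a sketch of it below (whether applied via ALTER SYSTEM as shown or by editing postgresql.conf directly, shared_buffers only takes effect after a restart):

ALTER SYSTEM SET shared_buffers = '50GB';  -- previously 500MB; requires a server restart to take effect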


Ben Snaidero
Geotab
Senior Database Specialist

Re: Many DataFileRead - IO waits

From
Michael Lewis
Date:
How big is ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on disk? If you create another index with the same fields, how much space does it take? Real question: are you vacuuming aggressively enough for your workload? Your index name seems to indicate that intotherid3 would be the third key, and yet the planner chose not to scan that deep and instead filtered after it found the relevant tuples based on intid and timestampdate. That seems peculiar to me.
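
Something like this would show the current on-disk size (a sketch, using the index name from your plan; building a second index with the same column list and comparing sizes would show how much of that is bloat):

SELECT pg_size_pretty(pg_relation_size('ix_tabledata_intid_timestampdate_intotherid3_intotherid2'));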

The documentation discourages multi-column indexes because they have limited application unless the same fields are always used. Personally, I don't love reviewing the stats of indexscans or how many tuples were fetched and having to guess how deeply the index was scanned for the various queries involved.

I'd wonder if an index on only intid_timestampdate would be both much smaller and also have a more right-leaning pattern of information being added and accessed, in terms of keeping frequently needed blocks in shared buffers.
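
A sketch of what I mean (the index name here is made up):

CREATE INDEX CONCURRENTLY ix_tabledata_intid_timestampdate
    ON tabledata (int_id, timestamp_date);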

As a side note, that planning time seems high to me for such a simple query. Have you increased default_statistics_target significantly perhaps?
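
Checking it is trivial if you haven't already:

SHOW default_statistics_target;  -- the default is 100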

Re: Many DataFileRead - IO waits

From
Ben Snaidero
Date:
On Thu, Feb 27, 2020 at 11:54 AM Michael Lewis <mlewis@entrata.com> wrote:
How big is ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on disk? If you create another index with the same fields, how much space does it take? Real question: are you vacuuming aggressively enough for your workload? Your index name seems to indicate that intotherid3 would be the third key, and yet the planner chose not to scan that deep and instead filtered after it found the relevant tuples based on intid and timestampdate. That seems peculiar to me.

The documentation discourages multi-column indexes because they have limited application unless the same fields are always used. Personally, I don't love reviewing the stats of indexscans or how many tuples were fetched and having to guess how deeply the index was scanned for the various queries involved.

I'd wonder if an index on only intid_timestampdate would be both much smaller and also have a more right-leaning pattern of information being added and accessed, in terms of keeping frequently needed blocks in shared buffers.

As a side note, that planning time seems high to me for such a simple query. Have you increased default_statistics_target significantly perhaps?

In this case the index is quite large, ~400GB, but as you can see from the explain plan it's doing a backward scan and not accessing that many buffers.  Other servers with this issue are much smaller.  We have autovacuum set to the default settings, but this table does not get any deletes, so I don't think that is the problem.  I think the reason it does not go deeper into the index keys is that it's just looking for the first occurrence based on date (limit 1), not all of them.  Even if it were looking for all of them, I think it would still scan in the same way, since there would be other int_otherid3 values between the ones in this search key.
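
For what it's worth, this is roughly the check we use to see whether autovacuum/autoanalyze are keeping up on that table (a sketch):

SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'tabledata';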

In regards to default_statistics_target I have not increased this value at all.

All this said, regarding statistics and vacuum/bloat, we restored a two-day-old copy of the database (on one of the servers experiencing the issue) and the issue was still present.  These systems are all on cloud infrastructure, so I am leaning towards it being something hardware related (especially as it's only happening on a few servers), but our cloud provider says nothing has changed in this respect.

Re: Many DataFileRead - IO waits

From
Michael Lewis
Date:
If no updates or deletes are happening on the table, it would be best practice to set up a scheduled manual vacuum analyze to ensure statistics and the visibility map are kept up to date. Other than creating the index on the first two columns only, I'm out of ideas. Hopefully someone running Postgres at large scale on Windows will chime in.
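
That is, something along these lines run on a schedule (a sketch; pick whatever cadence fits your write volume):

VACUUM (ANALYZE, VERBOSE) tabledata;  -- updates the statistics and the visibility map in one pass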

Re: Many DataFileRead - IO waits

From
Ben Snaidero
Date:
On Fri, Feb 28, 2020 at 11:41 AM Michael Lewis <mlewis@entrata.com> wrote:
If no updates or deletes are happening on the table, it would be best practice to set up a scheduled manual vacuum analyze to ensure statistics and the visibility map are kept up to date. Other than creating the index on the first two columns only, I'm out of ideas. Hopefully someone running Postgres at large scale on Windows will chime in.

Yep.  We run a manual vacuum freeze analyze weekly to ensure the visibility map is updated and statistics are up to date.

Thanks for taking the time to look. 

Re: Many DataFileRead - IO waits

From
legrand legrand
Date:
Hello,
I'm not able to make use of your perf diagrams, but it seems to me that not using the 3rd column of that index (int_otherid2) generates an IO problem.

Could you give us the result of

explain (analyze,buffers) SELECT
tabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver
FROM tabledata
WHERE timestamp_date <= '2020-02-24 03:05:00.013'::timestamp without time zone
AND int_otherid3 = '3ad2b707-a068-42e8-b0f2-6c8570953760'
AND tabledata.int_id=8149
ORDER BY timestamp_date DESC
LIMIT 1

and this for each value of int_otherid3?
And tell us if you are able to change the SQL?

Thanks
Regards
PAscal






Re: Many DataFileRead - IO waits

From
Jeff Janes
Date:
On Thu, Feb 27, 2020 at 11:33 AM Ben Snaidero <bensnaidero@geotab.com> wrote:
 
I have the following query, which was averaging ~2 ms, suddenly jump to an average of ~25 ms.

What are you averaging over?  The plan you show us is slow enough that if you were averaging over the last 1000 executions, that one execution could skew the entire average just by itself.  When individual execution times can vary over 4 powers of 10, I don't think averages are a very good way of analyzing things.

 
This query is called millions of times per day, and there were cases of it taking 20-30 seconds.  Below is the explain analyze of one such example.
While seeing this issue the server was under some CPU pressure, but even with that I would not expect it to get as slow as shown below: we are using SSDs, and none of the Windows disk counters (IOPS, queue length) show any value that would be of concern.

What is the average and expected random read latency on your SSDs?  Have you benchmarked them (outside of the database system) to see if they are performing as expected?
 
             Rows Removed by Filter: 91686
             Buffers: shared hit=12102 read=13259 written=111

Do the faster executions have fewer rows removed by the filter (and fewer buffers read), or are they just faster despite having about the same values?

 
We took a perfview during the issue and below is the call stack from a process running this query, two call paths are shown.

I've never used perfview.  But if I try to naively interpret it similar to gdb backtrace, it doesn't make much sense to me.  InitBufTable is only called by "postmaster" while starting the database, how could it be part of call paths during regular operations?  Are these views of the slow-running back end itself, or of some other postgresql process which was idle at the time the snapshot was taken?

Cheers,

Jeff

Re: Many DataFileRead - IO waits

From
Ben Snaidero
Date:
On Fri, Feb 28, 2020 at 2:00 PM legrand legrand <legrand_legrand@hotmail.com> wrote:
Hello,
I'm not able to make use of your perf diagrams, but it seems to me that not using the 3rd column of that index (int_otherid2) generates an IO problem.

Could you give us the result of

explain (analyze,buffers) SELECT
tabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver
FROM tabledata
WHERE timestamp_date <= '2020-02-24 03:05:00.013'::timestamp without time zone
AND int_otherid3 = '3ad2b707-a068-42e8-b0f2-6c8570953760'
AND tabledata.int_id=8149
ORDER BY timestamp_date DESC
LIMIT 1

and this for each value of int_otherid3?
And tell us if you are able to change the SQL?

Thanks
Regards
PAscal



Thanks for the suggestion.  Yes, I could change the SQL, and when using only one filter for int_otherid2 it does use all 3 columns as the index key.
 
explain (analyze,buffers) SELECT uuid_id,int_id,timestamp_date,int_otherid,float_value,int_otherid2,int_otherid3,int_rowver
FROM tabledata WHERE dtdatetime <= '2020-01-20 03:05:00.013' AND gDiagnosticId IN ('3c99d61b-21a1-42ea-92a8-3cc88d79f3f1') AND
  ivehicleid=8149 ORDER BY dtdatetime DESC LIMIT 1
                                                                                              QUERY PLAN                                                                                              
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.71..85.13 rows=1 width=84) (actual time=300.820..300.821 rows=1 loops=1)
   Buffers: shared hit=17665 read=1
   ->  Index Scan Backward using ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata (cost=0.71..41960.39 rows=497 width=84) (actual time=300.808..300.809 rows=1 loops=1)
         Index Cond: ((int_id = 8149) AND (timestamp_date <= '2020-01-20 03:05:00.013'::timestamp without time zone) AND (int_otherid2 = '3c99d61b-21a1-42ea-92a8-3cc88d79f3f1'::uuid))
         Buffers: shared hit=17665 read=1
 Planning time: 58.769 ms
 Execution time: 300.895 ms
(7 rows) 

I still haven't been able to explain why this changed all of a sudden (I am working on reproducing this in a test environment), but this could be a good workaround.  I might be able to just make 6 calls, or maybe rewrite the original query some other way in order to get it to use all 3 keys of the index; a sketch of one possible rewrite is below.  I'll have to do some more testing.
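
For example, something along these lines (just a sketch using the obfuscated column names from the original query: each LATERAL branch filters on a single int_otherid3 value so the planner can use the deeper index keys, and the outer LIMIT keeps only the newest row overall):

SELECT t.*
FROM (VALUES ('3ad2b707-a068-42e8-b0f2-6c8570953760'::uuid),
             ('4e1b1bfa-34e1-48df-8cf8-2b59caf076e2'::uuid)
             -- ...one row per int_otherid3 value, 6 in total
     ) AS v(otherid3)
CROSS JOIN LATERAL (
    SELECT uuid_id, int_id, timestamp_date, int_otherid, float_value,
           int_otherid2, int_otherid3, int_rowver
    FROM tabledata
    WHERE timestamp_date <= '2020-02-24 03:05:00.013'
      AND int_otherid3 = v.otherid3
      AND int_id = 8149
    ORDER BY timestamp_date DESC
    LIMIT 1
) AS t
ORDER BY t.timestamp_date DESC
LIMIT 1;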

Thanks again.

Re: Many DataFileRead - IO waits

From
Ben Snaidero
Date:
On Sat, Feb 29, 2020 at 11:22 AM Jeff Janes <jeff.janes@gmail.com> wrote:
On Thu, Feb 27, 2020 at 11:33 AM Ben Snaidero <bensnaidero@geotab.com> wrote:
 
I have the following query, which was averaging ~2 ms, suddenly jump to an average of ~25 ms.

What are you averaging over?  The plan you show us is slow enough that if you were averaging over the last 1000 executions, that one execution could skew the entire average just by itself.  When individual execution times can vary over 4 powers of 10, I don't think averages are a very good way of analyzing things.

 
This query is called millions of times per day, and there were cases of it taking 20-30 seconds.  Below is the explain analyze of one such example.
While seeing this issue the server was under some CPU pressure, but even with that I would not expect it to get as slow as shown below: we are using SSDs, and none of the Windows disk counters (IOPS, queue length) show any value that would be of concern.

What is the average and expected random read latency on your SSDs?  Have you benchmarked them (outside of the database system) to see if they are performing as expected?
 
             Rows Removed by Filter: 91686
             Buffers: shared hit=12102 read=13259 written=111

Do the faster executions have fewer rows removed by the filter (and fewer buffers read), or are they just faster despite having about the same values?

 
We took a perfview during the issue and below is the call stack from a process running this query, two call paths are shown.

I've never used perfview.  But if I try to naively interpret it similar to gdb backtrace, it doesn't make much sense to me.  InitBufTable is only called by "postmaster" while starting the database, how could it be part of call paths during regular operations?  Are these views of the slow-running back end itself, or of some other postgresql process which was idle at the time the snapshot was taken?

Cheers,

Jeff

Query statistics were averaged over ~3 million calls, so I don't think a single outlier would skew the results too much.

The perfview call stack is similar to a gdb backtrace.  I am 99% sure that this call path is from the backend running this query, as we queried pg_stat_activity at the time of the perfview and cross-referenced the PIDs.  That said, I am going to try building on Windows with debug symbols enabled and see if I can use gdb to debug and confirm.

Re: Many DataFileRead - IO waits

From
legrand legrand
Date:
> Thanks for the suggestion.  Yes I could change the sql and when using only
> one filter for int_otherid2 it does use all 3 columns as the index key.

explain (analyze,buffers) SELECT uuid_id,int_id,timestamp_date,int_otherid,float_value,int_otherid2,int_otherid3,int_rowver
FROM tabledata WHERE dtdatetime <= '2020-01-20 03:05:00.013' AND gDiagnosticId IN ('3c99d61b-21a1-42ea-92a8-3cc88d79f3f1') AND
  ivehicleid=8149 ORDER BY dtdatetime DESC LIMIT 1

                                                                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.71..85.13 rows=1 width=84) (actual time=300.820..300.821 rows=1 loops=1)
   Buffers: shared hit=17665 read=1
   ->  Index Scan Backward using ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata (cost=0.71..41960.39 rows=497 width=84) (actual time=300.808..300.809 rows=1 loops=1)
         Index Cond: ((int_id = 8149) AND (timestamp_date <= '2020-01-20 03:05:00.013'::timestamp without time zone) AND (int_otherid2 = '3c99d61b-21a1-42ea-92a8-3cc88d79f3f1'::uuid))
         Buffers: shared hit=17665 read=1
 Planning time: 58.769 ms
 Execution time: 300.895 ms
(7 rows)

> I still haven't been able to explain why this changed all of a sudden (I am
> working on reproducing this error in a test environment) but this could be
> a good workaround.  I might be able to just make 6 calls or maybe rewrite
> the original query some other way in order to get it to use all 3 keys of
> the index.  I'll have to do some more testing

Planning of 58 ms, and 300 ms to read 17665 blocks from memory, are both very, very bad ...
Are those shared buffers actually in memory, or swapped out?
Is the server CPU bound or CPU limited?

Maybe you should dump some data for a test case on another platform
(any desktop) to get a comparison point.

Regards
PAscal


