Re: Searching in varchar column having 100M records - Mailing list pgsql-performance

From Tomas Vondra
Subject Re: Searching in varchar column having 100M records
Msg-id 20190717124846.xbvcvfjg5hdcnaed@development
In response to Re: Searching in varchar column having 100M records  (Sergei Kornilov <sk@zsrv.org>)
Responses Re: Searching in varchar column having 100M records  (Andreas Kretschmer <andreas@a-kretschmer.de>)
List pgsql-performance
On Wed, Jul 17, 2019 at 02:53:20PM +0300, Sergei Kornilov wrote:
>Hello
>
>Please recheck with track_io_timing = on in the configuration. EXPLAIN
>(ANALYZE, BUFFERS) with this option will report how much time is spent
>on I/O.
>
>>   Buffers: shared hit=2 read=31492
>
>31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using an HDD.
>
>Your query reads table data from disk (well, or from the OS cache). You need
>more RAM for shared_buffers, or disks with better performance.
>
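(As a minimal sketch of that suggestion, with placeholder table/column names
that are not from the original report:

    -- hypothetical names; substitute the table and varchar column from the real query
    SET track_io_timing = on;   -- setting this per session typically requires superuser

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT user_id
      FROM some_table
     WHERE some_field = 'some value';

    -- with track_io_timing on, the plan nodes also report "I/O Timings: read=..."
    -- alongside the buffer counts, so you can see how much of the runtime is I/O.)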

Either that, or try creating a covering index, so that the query can do an
index-only scan. That might reduce the amount of I/O against the table, and
in the index the matching entries should sit close together (on the same
page, or on pages near each other).

So try something like

    CREATE INDEX ios_idx ON table (field, user_id);

and make sure the table is vacuumed often enough (so that the visibility
map is up to date).
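To check that the index actually enables an index-only scan, re-running EXPLAIN
against it is enough; a minimal sketch, again with placeholder names:

    -- hypothetical names, same as above; the index covers both the filter and the returned column
    CREATE INDEX ios_idx ON some_table (some_field, user_id);
    VACUUM some_table;   -- VACUUM is what keeps the visibility map current

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT user_id
      FROM some_table
     WHERE some_field = 'some value';

    -- the plan should now show "Index Only Scan using ios_idx" and a low "Heap Fetches" count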


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



