Re: Query RE: Optimising UUID Lookups - Mailing list pgsql-performance

From David Rowley
Subject Re: Query RE: Optimising UUID Lookups
Msg-id CAApHDvouOYKYdaLgo4PaeL1sMWrXj0-fj0koWHyqtE5Yc4_cDA@mail.gmail.com
In response to Re: Query RE: Optimising UUID Lookups  (Roland Dunn <roland.dunn@gmail.com>)
Responses Re: Query RE: Optimising UUID Lookups  (Roland Dunn <roland.dunn@gmail.com>)
List pgsql-performance
On 21 March 2015 at 23:34, Roland Dunn <roland.dunn@gmail.com> wrote:

If we did add more RAM, would it be the effective_cache_size setting
that we would alter? Is there a way to force PG to load a particular
table into RAM? If so, is it actually a good idea?

Have you had a look at EXPLAIN (ANALYZE, BUFFERS) for the query?

Pay special attention to "Buffers: shared read=NNN" and "Buffers: shared hit=NNN". If you're not reading any buffers between runs, then the pages are in the PostgreSQL shared buffers; by the looks of your config you have 10GB of these. On the other hand, if you're getting buffer reads, then they're coming either from disk or from the OS cache. PostgreSQL won't really know the difference.
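As a minimal sketch of what to look for (table and column names here are hypothetical; substitute your actual query):

```sql
-- Hypothetical example: my_table has a uuid primary key "id".
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table
WHERE id = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11';

-- In the output, compare the Buffers lines across runs, e.g.:
--   Buffers: shared hit=4          -- every page came from shared_buffers
--   Buffers: shared hit=1 read=3   -- 3 pages came from the OS cache or disk
```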

If you're not getting any buffer reads and it's still slow, then the problem is not I/O.

Just for fun... What happens if you stick the 50 UUIDs in some table, analyze it, then perform a join between the two tables, using IN() or EXISTS()? Is that any faster?
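Something along these lines (a sketch, assuming a large table my_table with a uuid column "id"; the UUID values shown are placeholders for your 50):

```sql
-- Load the target UUIDs into a temp table and analyze it so the
-- planner has statistics for the join.
CREATE TEMP TABLE wanted_ids (id uuid PRIMARY KEY);
INSERT INTO wanted_ids (id) VALUES
  ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'),
  ('b1ffcd88-8d1c-4fe9-cc7e-7cc8ce491b22');  -- ... the rest of the 50
ANALYZE wanted_ids;

-- Join form:
SELECT t.*
FROM my_table t
JOIN wanted_ids w ON w.id = t.id;

-- IN() form:
SELECT * FROM my_table WHERE id IN (SELECT id FROM wanted_ids);

-- EXISTS() form:
SELECT * FROM my_table t
WHERE EXISTS (SELECT 1 FROM wanted_ids w WHERE w.id = t.id);
```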

Also how well does it perform with: set enable_bitmapscan = off; ?
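That is, something like the following for the current session (the SELECT is a placeholder for your query):

```sql
SET enable_bitmapscan = off;       -- discourage bitmap heap/index scans
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table WHERE id = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11';
RESET enable_bitmapscan;           -- restore the default afterwards
```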

Regards

David Rowley


