Re: When to use PARTITION BY HASH? - Mailing list pgsql-general

From Jeff Janes
Subject Re: When to use PARTITION BY HASH?
Msg-id CAMkU=1xRi8eBb5=TKxLOwzScagzzBU0A+Bf33fK3xyVXCis2Qw@mail.gmail.com
In response to Re: When to use PARTITION BY HASH?  (Oleksandr Shulgin <oleksandr.shulgin@zalando.de>)
Responses Re: When to use PARTITION BY HASH?  (Ron <ronljohnsonjr@gmail.com>)
List pgsql-general
On Fri, Jun 5, 2020 at 6:12 AM Oleksandr Shulgin <oleksandr.shulgin@zalando.de> wrote:
> On Thu, Jun 4, 2020 at 4:32 PM Jeff Janes <jeff.janes@gmail.com> wrote:
>> On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <oleksandr.shulgin@zalando.de> wrote:

>>> With hash partitioning you are not expected, in general, to end up with a small number of partitions being accessed more heavily than the rest. So your indexes will also not fit into memory.

>>> I have the feeling that using a hash function to distribute rows simply contradicts the basic assumption of when you would think of partitioning your table at all: that is, to make sure the most active part of the table and indexes is small enough to be cached in memory.

>> While hash partitioning doesn't appeal to me, I think this may be overly pessimistic. It would not be all that unusual for your customers to take turns being highly active and less active. Especially if you do occasional bulk loads, each for a single customer_id.

> For a bulk load you'd likely want to go with an empty partition w/o indexes and build them later, after loading the tuples.

That only works if the bulk load is starting from zero.  If you are adding a million rows to something that already has 100 million, you would probably spend more time rebuilding the indexes than you saved by dropping them.  And of course to go with an empty partition, you have to be using partitioning of some kind to start with; and then you need to futz around with creating, indexing, and attaching (or detaching) partitions.  With hash partitioning, you might get much of the benefit with none of the futzing, as sketched below.
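
For concreteness, a minimal sketch of both routes (all table, column, and file names here are invented for illustration):

-- The "futzing" route: load into a fresh, index-less table, build the
-- index afterwards, then attach it as a list partition.  Every new
-- customer means repeating this dance.
CREATE TABLE events (
    customer_id bigint NOT NULL,
    created_at  timestamptz NOT NULL,
    payload     text
) PARTITION BY LIST (customer_id);

CREATE TABLE events_c42 (LIKE events INCLUDING DEFAULTS);
COPY events_c42 FROM '/tmp/customer_42.csv' WITH (FORMAT csv);
CREATE INDEX ON events_c42 (created_at);
ALTER TABLE events ATTACH PARTITION events_c42 FOR VALUES IN (42);

-- The hash route: declare the partitions once and never touch the DDL
-- again as customers come and go.
CREATE TABLE events_h (
    customer_id bigint NOT NULL,
    created_at  timestamptz NOT NULL,
    payload     text
) PARTITION BY HASH (customer_id);

CREATE TABLE events_h_0 PARTITION OF events_h
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
-- ...and likewise for remainders 1 through 3.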
 
>> So while you might not have a permanently hot partition, you could have partitions which are hot in turn. Of course you could get the same benefit (and probably better) with list or range partitioning rather than hash, but then you have to maintain those lists or ranges when you add new customers.

> Why is LRU eviction from the shared buffers and the OS disk cache not good enough to handle this?

Data density.  If the rows are spread out randomly throughout the table, the density of currently relevant tuples per MB of RAM is much lower than if they are in partitions which align with current relevance.  Of course you could CLUSTER the table on what would otherwise be the partition key, but clustered tables don't stay clustered, while partitioned ones stay partitioned.  Also, clustering the table wouldn't help with the relevant data density in the indexes (other than the index being clustered on, or others highly correlated with it).  This can be particularly important for index maintenance and with HDDs, as the OS disk cache is in my experience pretty bad at deciding when to write out dirty blocks that have been handed to it, versus retaining them in the hope that they will be re-dirtied soon, or that adjacent blocks will be dirtied and the writes combined into one.
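
For reference, the CLUSTER alternative looks like this (a sketch on an invented, unpartitioned table):

CREATE TABLE readings (
    customer_id bigint NOT NULL,
    reading     numeric
);
CREATE INDEX readings_customer_idx ON readings (customer_id);

-- Physically rewrite the heap in index order, packing rows for the same
-- customer into adjacent pages.  This takes an ACCESS EXCLUSIVE lock,
-- and the ordering decays as new rows arrive -- hence "clustered tables
-- don't stay clustered".
CLUSTER readings USING readings_customer_idx;

-- A later bare CLUSTER re-runs the rewrite using the recorded index.
CLUSTER readings;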
 

> This actually applies to any partitioning scheme: the hot dataset could be recognized by these caching layers. Does it not happen in practice?

Caching only happens at the page level, not the tuple level.  So if your hot tuples are interspersed with cold ones, you can get poor caching effectiveness.
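
You can see that page-level granularity directly with the pg_buffercache extension from contrib, which reports which relations currently occupy shared buffers (a sketch; needs superuser or pg_monitor membership):

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count the shared buffers held by each relation in the current database.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

A page holding one hot tuple among many cold ones still occupies a full buffer, which is why interspersed hot rows cache so poorly.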

Cheers,

Jeff
