Some time back, on one of the PostgreSQL blogs [1], there was a
discussion about the performance of drop/truncate table for large
values of shared_buffers: as the value of shared_buffers increases,
the performance of drop/truncate table becomes worse. I think those
are not frequently used operations, so it never became a priority to
look into improving them.
I have looked into it and found that the main reason for this
behaviour is that these operations traverse the whole of
shared_buffers, and it seems to me that we don't need to do that,
especially for not-so-big tables. We can optimize that path by
probing the buffer mapping table for the pages that exist in
shared_buffers, for the case when the table size is less than some
threshold (say 25%) of shared_buffers.
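
To make the idea concrete, the lookup path would look roughly like
below. This is just a sketch, not the attached patch: the function
name is invented, and it is imagined as living in
src/backend/storage/buffer/bufmgr.c (InvalidateBuffer() is static
there), but INIT_BUFFERTAG, BufTableLookup and friends are the
existing bufmgr primitives.

#include "postgres.h"

#include "storage/buf_internals.h"
#include "storage/bufmgr.h"

/*
 * Hypothetical helper: for each page of the relation, probe the
 * buffer mapping table instead of scanning all of shared_buffers.
 */
static void
DropRelBuffersByLookup(RelFileNode rnode, ForkNumber forkNum,
                       BlockNumber nblocks)
{
    BlockNumber blkno;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        BufferTag   tag;            /* on-disk identity of the page */
        uint32      hash;
        LWLock     *partitionLock;
        int         buf_id;

        /* Look up this page in the buffer mapping table. */
        INIT_BUFFERTAG(tag, rnode, forkNum, blkno);
        hash = BufTableHashCode(&tag);
        partitionLock = BufMappingPartitionLock(hash);

        LWLockAcquire(partitionLock, LW_SHARED);
        buf_id = BufTableLookup(&tag, hash);
        LWLockRelease(partitionLock);

        if (buf_id >= 0)
        {
            BufferDesc *bufHdr = GetBufferDescriptor(buf_id);

            /*
             * Re-check the tag under the buffer header lock: the
             * buffer could have been evicted and reused since the
             * lookup.  InvalidateBuffer() expects the header lock to
             * be held and releases it itself.
             */
            LockBufHdr(bufHdr);
            if (BUFFERTAGS_EQUAL(bufHdr->tag, tag))
                InvalidateBuffer(bufHdr);
            else
                UnlockBufHdr(bufHdr);
        }
    }
}

So for a small table we do a handful of hash lookups instead of
touching every buffer header, which is why the cost stops growing
with shared_buffers.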
The attached patch implements the above idea, and I found that
performance doesn't dip much with the patch even with large values
of shared_buffers. I have also attached the script and sql file used
to take the performance data.
m/c configuration
--------------------------
IBM POWER-7 16 cores, 64 hardware threads
RAM = 64GB
Shared_buffers (MB) |   8 |  32 | 128 | 1024 | 8192
--------------------+-----+-----+-----+------+-----
HEAD (commit) - tps | 138 | 130 | 124 |  103 |   48
Patch - tps         | 138 | 132 | 132 |  130 |  133
I have observed that this optimization has no effect if the value of
shared_buffers is small (say 8MB, 16MB, ...), so I have used it only
when the value of shared_buffers is greater than or equal to 32MB.
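
Roughly, the gate I have in mind looks like below (again just a
sketch continuing the one above; the helper name is invented, and
the constants are simply the 32MB floor and the 25% threshold
mentioned earlier). NBuffers is the number of shared buffers and
BLCKSZ the block size, so NBuffers * BLCKSZ is shared_buffers in
bytes.

/*
 * Hypothetical gate: use the per-page lookup only when the cache is
 * big enough for a full scan to hurt and the relation covers less
 * than a quarter of it.
 */
static bool
UseLookupInvalidation(BlockNumber nblocks)
{
    if ((Size) NBuffers * BLCKSZ < (Size) 32 * 1024 * 1024)
        return false;           /* small cache: full scan is cheap */
    return nblocks < (BlockNumber) (NBuffers / 4);
}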
We might want to use a similar optimization for
DropRelFileNodeBuffers() as well.