On Wed, Mar 30, 2016 at 4:10 PM, Andres Freund <andres@anarazel.de> wrote:
> Indeed. On SSDs I see about a 25-35% gain, on HDDs about 5%. If I
> increase the size of backend_flush_after to 64 (like it's for bgwriter)
> I however do get about 15% for HDDs as well.
I tried the same test mentioned in the original post on cthulhu (EDB
machine, CentOS 7.2, 8 sockets, 8 cores per socket, 2 threads per
core, Xeon E7-8830 @ 2.13 GHz). I tested the effects of both
multi_extend_v21 and the *_flush_after settings. The machine has both
HDD and SSD storage, but I used the HDD for this test.
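For anyone who wants to reproduce this: I don't have the exact script
from the original post in front of me, but the shape of the test is
roughly the following same-table parallel COPY (the table definition
and data file here are placeholders, not the originals):

    createdb copytest
    psql -d copytest -c "CREATE TABLE t (a int, b text);"
    # "4 parallel copies" = 4 concurrent sessions loading the same table
    time ( for i in 1 2 3 4; do
               psql -d copytest -c "\copy t from '/tmp/data.csv' with csv" &
           done; wait )

The 1-copy runs are the same thing with a single session.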
Logged tables throughout; three timings per configuration:

branch            copies  {backend,bgwriter}_flush_after  times
master            4       default                         1m15.411s, 1m14.248s, 1m15.040s
master            1       default                         0m28.336s, 0m28.040s, 0m29.576s
multi_extend_v21  4       default                         0m46.058s, 0m44.515s, 0m45.688s
multi_extend_v21  1       default                         0m28.440s, 0m28.129s, 0m30.698s
master            4       0                               1m2.817s, 1m4.467s, 1m12.319s
multi_extend_v21  4       0                               0m41.301s, 0m41.104s, 0m41.342s
master            1       0                               0m26.948s, 0m26.829s, 0m26.616s
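To be clear, the *_flush_after=0 runs just disabled the writeback
control by setting both GUCs in postgresql.conf and reloading:

    backend_flush_after = 0     # 0 disables the explicit writeback requests
    bgwriter_flush_after = 0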
So the flushing is a small loss with only 1 parallel copy, and with 4
parallel copies it's a significant loss. However, the relation
extension patch reduces the regression considerably, probably because
it makes it far more likely that a backend doing a flush is flushing a
consecutive range of blocks, all of which it added to the relation
itself, so that there is no interleaving with other backends' writes.
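To illustrate the interleaving point: on Linux this flushing boils
down to sync_file_range(SYNC_FILE_RANGE_WRITE) requests, and a
consecutive run of blocks can be handed to the kernel as a single
range. A minimal sketch of that idea (a hypothetical helper, not the
actual PostgreSQL code; error handling omitted):

    /* Start kernel writeback for nblocks consecutive 8kB blocks with
     * one syscall; interleaved writers instead issue many requests
     * covering short, scattered runs. */
    #define _GNU_SOURCE
    #include <fcntl.h>

    #define BLCKSZ 8192

    void
    flush_block_range(int fd, long first_block, long nblocks)
    {
        sync_file_range(fd, (off_t) first_block * BLCKSZ,
                        (off_t) nblocks * BLCKSZ,
                        SYNC_FILE_RANGE_WRITE);
    }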
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company