Re: Enabling B-Tree deduplication by default - Mailing list pgsql-hackers

From Peter Geoghegan
Subject Re: Enabling B-Tree deduplication by default
Date
Msg-id CAH2-WzmBZ-oDJg_d77phsC3n6CPOVGgY+FwTZ-hZ1MfdUTY20g@mail.gmail.com
In response to Re: Enabling B-Tree deduplication by default  (Peter Geoghegan <pg@bowt.ie>)
Responses Re: Enabling B-Tree deduplication by default  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Wed, Jan 29, 2020 at 11:50 AM Peter Geoghegan <pg@bowt.ie> wrote:
> I should stop talking about it for now, and go back to reassessing the
> extent of the regression in highly unsympathetic cases. The patch has
> become faster in a couple of different ways since I last looked at
> this question, and it's entirely possible that the regression is even
> smaller than it was before.

I revisited the insert benchmark as a way of assessing the extent of
the regression from deduplication in a very unsympathetic case.
Background:

https://smalldatum.blogspot.com/2017/06/insert-benchmark-in-memory-intel-nuc.html

https://github.com/mdcallag/mytools/blob/master/bench/ibench/iibench.py

This workload consists of serial inserts into a table (confusingly
named "purchases_index") that has a standard serial primary key, plus
three additional non-unique indexes. A low-concurrency benchmark
seemed more likely to be regressed by the patch, so that's what I
focused on. The indexes have very few duplicates, and so don't
benefit from deduplication; the one exception is the
purchases_index_marketsegment index, which comes out a bit smaller
with the patch (see the log files for precise details). We insert
100,000,000 rows in total, which takes under 30 minutes in each case.
There are no reads, and no updates or deletes.
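
To give a rough idea of the physical shape of the schema without
reading iibench.py, it looks something like the following (the column
list and index definitions are my paraphrase of the iibench design,
not the exact DDL that the script issues):

CREATE TABLE purchases_index (
    transactionid   bigserial PRIMARY KEY, -- serial key; rows arrive in key order
    dateandtime     timestamp,
    cashregisterid  integer NOT NULL,
    customerid      integer NOT NULL,
    productid       integer NOT NULL,
    price           float8 NOT NULL,
    data            varchar(4000)
);

-- Three non-unique indexes over mostly-random values, so very few duplicates
CREATE INDEX purchases_index_marketsegment ON purchases_index (price, customerid);
CREATE INDEX purchases_index_registersegment ON purchases_index (cashregisterid, price, customerid);
CREATE INDEX purchases_index_pdc ON purchases_index (price, dateandtime, customerid);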

There is a regression of just shy of 2% here, as measured in insert
benchmark "rows/sec": the metric goes from 62190.0 rows/sec on master
to 60986.2 rows/sec with the patch. I think that this is an
acceptable price to pay for the benefits -- it's a small regression
in a particularly unfavorable case. Also, I suspect that this result
is still quite a bit better than what you'd get with either InnoDB or
MyRocks on the same hardware (those systems were the original targets
of the insert benchmark, which was only recently ported over to
Postgres). At least, Mark Callaghan reported getting only about 40k
rows/sec in 2017 with roughly comparable hardware and test conditions
(we're both running with synchronous_commit=off, or the equivalent).
We're paying a small cost in an area where Postgres can afford to
take a hit, in order to gain a much larger benefit in an area where
Postgres is much less competitive.
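
To spell out the arithmetic behind "just shy of 2%":
(62190.0 - 60986.2) / 62190.0 = ~0.019, i.e. a drop of a little under
2% in rows/sec.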

I attach detailed output from runs for both master and patch.

The shell script that I used to run the benchmark is as follows:

#!/bin/sh
# Create the database used for the benchmark
psql -c "create database test;"

cd $HOME/code/mytools/bench/ibench
# Create the purchases_index table and its indexes
python2 iibench.py --dbms=postgres --setup | tee iibench-output.log
# Insert 100 million rows (insert-only; no reads, updates, or deletes)
python2 iibench.py --dbms=postgres --max_rows=100000000 | tee -a iibench-output.log
# Record the size of each relation afterwards, largest first
psql -d test -c "SELECT pg_relation_size(oid), pg_size_pretty(pg_relation_size(oid)), relname FROM pg_class WHERE relnamespace = 'public'::regnamespace ORDER BY 1 DESC LIMIT 15;" | tee -a iibench-output.log
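
If you only care about the one index that actually benefits, a more
targeted query along these lines (my own variant, not something that
iibench produces) can be run against the test database instead:

SELECT relname, pg_size_pretty(pg_relation_size(oid))
FROM pg_class
WHERE relname = 'purchases_index_marketsegment';

That shows the purchases_index_marketsegment size difference
directly, without having to pick it out of the top-15 list.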

-- 
Peter Geoghegan

