On 2018/01/20 7:07, Robert Haas wrote:
> On Fri, Jan 19, 2018 at 3:56 AM, Amit Langote
> <Langote_Amit_f8@lab.ntt.co.jp> wrote:
>> I rebased the patches, since they started conflicting with a recently
>> committed patch [1].
>
> I think that my latest commit has managed to break this pretty thoroughly.
I rebased the patches. Here are the performance numbers again.
* Uses the following hash-partitioned table:
create table t1 (a int, b int) partition by hash (a);
create table t1_x partition of t1 for values with (modulus M, remainder R)
...
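For example, with 8 partitions the complete setup would look like the
following (the partition names t1_0 .. t1_7 are illustrative; the actual
script isn't shown here):

-- illustrative partition names, one partition per remainder 0..7
create table t1_0 partition of t1 for values with (modulus 8, remainder 0);
create table t1_1 partition of t1 for values with (modulus 8, remainder 1);
create table t1_2 partition of t1 for values with (modulus 8, remainder 2);
create table t1_3 partition of t1 for values with (modulus 8, remainder 3);
create table t1_4 partition of t1 for values with (modulus 8, remainder 4);
create table t1_5 partition of t1 for values with (modulus 8, remainder 5);
create table t1_6 partition of t1 for values with (modulus 8, remainder 6);
create table t1_7 partition of t1 for values with (modulus 8, remainder 7);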
* Non-bulk insert uses the following code (inserts 100,000 rows one by one):
do $$
begin
  for i in 1..100000 loop
    insert into t1 values (i, i+1);
  end loop;
end; $$;
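(The post doesn't say how the times below were collected; one way to
reproduce them, assuming psql, is its \timing meta-command:)

\timing on
-- then run the DO block above; psql prints "Time: ... ms" for each statement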
Times in milliseconds:
#parts       HEAD    Patched
     8   6148.313   4938.775
    16   8882.420   6203.911
    32  14251.072   8595.068
    64  24465.691  13718.161
   128  45099.435  23898.026
   256  87307.332  44428.126
* Bulk-inserting 100,000 rows using COPY:
copy t1 from '/tmp/t1.csv' csv;
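The input file isn't included here; assuming rows of the same shape as in
the non-bulk test, it can be generated with something like:

-- server-side COPY; needs appropriate permission to write '/tmp/t1.csv'
copy (select i, i+1 from generate_series(1, 100000) as s(i)) to '/tmp/t1.csv' csv;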
Times in milliseconds:
#parts     HEAD  Patched
     8  466.170  446.865
    16  445.341  444.990
    32  443.544  487.713
    64  460.579  435.412
   128  469.953  422.403
   256  463.592  431.118
Thanks,
Amit