On Mon, Jan 2, 2017 at 3:35 PM, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
>>> Same code across network, client in Bellingham WA, server in Fremont CA:
>>>
>>> Without autocommit:
>>>
>>> In [51]: %timeit -n 10 cur.executemany(sql, l)
>>> 10 loops, best of 3: 8.22 s per loop
>>>
>>>
>>> With autocommit:
>>>
>>> In [56]: %timeit -n 10 cur.executemany(sql, l)
>>> 10 loops, best of 3: 8.38 s per loop
>>
>> Adrian, have you got a benchmark "classic vs. joined" on remote
>> network? Thank you.
>
> With NRECS=10000 and page size=100:
>
> aklaver@tito:~> python psycopg_executemany.py -p 100
> classic: 427.618795156 sec
> joined: 7.55754685402 sec
This is really interesting. I have long been using a utility I put
together that inserts using BINARY COPY. In fact, I just brushed it up
a bit and put it on PyPI: <https://pypi.python.org/pypi/pgcopy>
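For anyone unfamiliar with the approach: COPY-based loading streams all
rows in a single operation instead of issuing one INSERT per row. Here is
a minimal sketch of the simpler text-format variant (pgcopy itself uses
the faster binary format); the table and column names in the usage
comment are made up for illustration:

```python
import io

def copy_buffer(rows):
    """Format rows as tab-separated text suitable for COPY ... FROM STDIN.

    Illustrative sketch only: real code must also escape tabs, newlines,
    and backslashes inside values; NULL is rendered as \\N.
    """
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join("\\N" if v is None else str(v) for v in row))
        buf.write("\n")
    buf.seek(0)
    return buf

# Against a live psycopg2 connection this would be used roughly as:
#   cur.copy_from(copy_buffer(rows), 'mytable', columns=('id', 'name'))
```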
I'm curious to run a benchmark against the improved executemany(). I had
hoped that pgcopy would be generally useful, but it may no longer be
necessary; a fast executemany() certainly covers more use cases.
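For readers wondering where the "joined" speedup in the numbers above
comes from: instead of paying one network round trip per row, statements
are batched into pages and sent joined by semicolons, so each page costs
a single round trip. A rough sketch of the paging (the page_size default
and the plain %-formatting are placeholders; real code would quote
parameters with cur.mogrify()):

```python
def paginate(rows, page_size):
    # Split an iterable of rows into lists of at most page_size rows.
    page = []
    for row in rows:
        page.append(row)
        if len(page) == page_size:
            yield page
            page = []
    if page:
        yield page

def joined_pages(sql, rows, page_size=100):
    # Render one statement per row, then join each page into a single
    # multi-statement string that can be sent in one round trip.
    # Plain %-formatting is for illustration only; it does no quoting.
    for page in paginate(rows, page_size):
        yield ";".join(sql % row for row in page)
```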
Best,
Aryeh Leib Taurog