Re: [psycopg] speed concerns with executemany() - Mailing list psycopg

From: Aryeh Leib Taurog
Subject: Re: [psycopg] speed concerns with executemany()
Date: 2017-01-19
Msg-id: 20170119122315.GA2605@deb76.aryehleib.com
In response to: Re: [psycopg] speed concerns with executemany()  (Adrian Klaver <adrian.klaver@aklaver.com>)
Responses: Re: [psycopg] speed concerns with executemany()  (Aryeh Leib Taurog <python@aryehleib.com>)
           Re: [psycopg] speed concerns with executemany()  (Daniele Varrazzo <daniele.varrazzo@gmail.com>)
List: psycopg
On Mon, Jan 2, 2017 at 3:35 PM, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
>>> Same code across network, client in Bellingham WA, server in Fremont CA:
>>>
>>> Without autocommit:
>>>
>>> In [51]: %timeit -n 10 cur.executemany(sql, l)
>>> 10 loops, best of 3: 8.22 s per loop
>>>
>>>
>>> With autocommit:
>>>
>>> In [56]: %timeit -n 10 cur.executemany(sql, l)
>>> 10 loops, best of 3: 8.38 s per loop
>>
>> Adrian, have you got a benchmark "classic vs. joined" on remote
>> network? Thank you.
>
> With NRECS=10000 and page size=100:
>
> aklaver(at)tito:~> python psycopg_executemany.py -p 100
> classic: 427.618795156 sec
> joined: 7.55754685402 sec

This is really interesting.  I have long been using a utility I put
together to insert using BINARY COPY.  In fact I just brushed it up a
bit and put it on PyPI: <https://pypi.python.org/pypi/pgcopy>
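
For anyone curious, basic pgcopy usage looks roughly like this (the
connection string, table name, and columns below are only placeholders):

from pgcopy import CopyManager
import psycopg2

conn = psycopg2.connect('dbname=test')   # placeholder connection
records = [(i, 'name %d' % i) for i in range(10000)]

# CopyManager streams all the records to the server as one binary COPY,
# so there is a single round trip instead of one statement per row.
mgr = CopyManager(conn, 'measurements', ('id', 'name'))
mgr.copy(records)
conn.commit()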

I'm curious to run a benchmark against the improved executemany.  I'd
hoped that pgcopy would be generally useful, but it may no longer be
necessary.  A fast executemany() certainly suits more use cases.
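
If I understand the approach correctly, the batching amounts to what the
execute_batch() helper in psycopg2.extras (2.7+) does: statements are
sent to the server in pages rather than one round trip per row.  A rough
sketch, with a made-up table and data:

import psycopg2
import psycopg2.extras

conn = psycopg2.connect('dbname=test')   # placeholder connection
cur = conn.cursor()
rows = [(i, i * 10) for i in range(10000)]

# execute_batch() groups the INSERTs into pages of page_size statements
# per round trip, which is where the win over plain executemany() comes
# from.
psycopg2.extras.execute_batch(
    cur,
    "INSERT INTO measurements (id, value) VALUES (%s, %s)",
    rows,
    page_size=100,
)
conn.commit()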


Best,
Aryeh Leib Taurog

