Re: [psycopg] speed concerns with executemany() - Mailing list psycopg

From Adrian Klaver
Subject Re: [psycopg] speed concerns with executemany()
Date
Msg-id 135fa407-af01-cef8-a809-8133115e6780@aklaver.com
In response to Re: [psycopg] speed concerns with executemany()  (Christophe Pettus <xof@thebuild.com>)
Responses Re: [psycopg] speed concerns with executemany()  (Dorian Hoxha <dorian.hoxha@gmail.com>)
Re: [psycopg] speed concerns with executemany()  (mike bayer <mike_mp@zzzcomputing.com>)
List psycopg
On 12/23/2016 06:57 PM, Christophe Pettus wrote:
>
>> On Dec 23, 2016, at 18:55, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
>> Alright that I get. Still the practical outcome is each INSERT is being done in a transaction (an implicit one) so the transaction overhead comes into play. Or am I missing something?
>
> Nope, not missing a thing.  The theory (and it is only that) is that when they do the .executemany(), each of those INSERTs pays the transaction overhead, while if they do one big INSERT, just that one statement does.
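
For reference, a minimal sketch of the one-big-INSERT approach Christophe
describes, assuming the same psycopg_table and a list l of 200 3-integer
tuples as in the test below (names here are illustrative):

import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

l = [(i, i + 1, i + 2) for i in range(200)]  # 200 tuples of 3 integers

# Render each tuple into a properly escaped "(a, b, c)" fragment, then
# run one multi-row INSERT so the statement overhead is paid only once.
values = ",".join(cur.mogrify("(%s, %s, %s)", t).decode("utf-8") for t in l)
cur.execute("INSERT INTO psycopg_table VALUES " + values)
conn.commit()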

Just ran a quick and dirty test using IPython %timeit.

With a list of 200 tuples, each containing 3 integers, INSERTing into:
test=> \d psycopg_table
  Table "public.psycopg_table"
  Column |  Type   | Modifiers
--------+---------+-----------
  a      | integer |
  b      | integer |
  c      | integer |


The results were:

sql = "INSERT INTO psycopg_table VALUES(%s, %s, %s)"

Without autocommit:

In [65]: timeit -n 10 cur.executemany(sql, l)
10 loops, best of 3: 12.5 ms per loop


With autocommit:

In [72]: timeit -n 10 cur.executemany(sql, l)
10 loops, best of 3: 1.71 s per loop
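
For anyone who wants to reproduce this outside IPython, a minimal
standalone sketch of the test, assuming a local "test" database and
substituting time.perf_counter() for %timeit:

import time
import psycopg2

conn = psycopg2.connect("dbname=test")
conn.autocommit = False  # flip to True for the autocommit run
cur = conn.cursor()

l = [(i, i + 1, i + 2) for i in range(200)]  # 200 tuples of 3 integers
sql = "INSERT INTO psycopg_table VALUES(%s, %s, %s)"

start = time.perf_counter()
cur.executemany(sql, l)  # one INSERT statement per tuple
if not conn.autocommit:
    conn.commit()  # a single commit covers all 200 INSERTs
print("%.4f s" % (time.perf_counter() - start))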


>
> --
> -- Christophe Pettus
>    xof@thebuild.com
>


--
Adrian Klaver
adrian.klaver@aklaver.com

