Re: [PERFORM] Insert Concurrency - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: [PERFORM] Insert Concurrency
Date
Msg-id CAOR=d=292ziEnZnj245dtPwmdo5BhHyudzB_q3yM+tTwDAGmDA@mail.gmail.com
In response to [PERFORM] Insert Concurrency  (ROBERT PRICE <rprice504@hotmail.com>)
Responses Re: [PERFORM] Insert Concurrency  (David McKelvie <dmck@interactive.co.uk>)
Re: [PERFORM] Insert Concurrency  (ROBERT PRICE <rprice504@hotmail.com>)
List pgsql-performance
On Mon, Apr 17, 2017 at 8:55 PM, ROBERT PRICE <rprice504@hotmail.com> wrote:
> I come from an Oracle background and am porting an application to postgres.
> App has a table that will contain 100 million rows and has to be loaded by a
> process that reads messages off a SQS queue and makes web service calls to
> insert records one row at a time in a postgres RDS instance. I know slow by
> slow is not the ideal approach but I was wondering if postgres had
> partitioning or other ways to tune concurrent insert statements. Process
> will run 50 - 100 concurrent threads.

It's not uncommon to reach for an Oracle-style solution while working with
another RDBMS, but what works in one engine often doesn't work the same,
or as well, in another.

Is it possible for you to roll up some of these inserts into a single
transaction in some way? Even inserting ten rows at a time instead of
one at a time can make a big difference in your insert rate, and being
able to roll up 100 or more together helps even more.
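The batching idea above can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 as a stand-in engine (so it runs anywhere); against Postgres you would do the same thing through your driver, e.g. psycopg2's executemany or execute_values. The table name, batch size, and row count are all illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

rows = [(i, "msg-%d" % i) for i in range(1000)]

# One row per transaction means each INSERT pays a full commit.
# Batching lets many rows share one commit, cutting per-row overhead.
batch_size = 100
with conn:  # single transaction for all the batches below
    for start in range(0, len(rows), batch_size):
        conn.executemany(
            "INSERT INTO events (id, payload) VALUES (?, ?)",
            rows[start:start + batch_size],
        )

count = conn.execute("SELECT count(*) FROM events").fetchone()[0]
print(count)  # 1000
```

The same trade-off applies in Postgres: fewer, larger transactions (or multi-row VALUES lists) amortize the per-commit cost that dominates slow-by-slow loading.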

Another possibility is to insert them into a smaller staging table, then
have a process come along every so often, copy all the rows into the
main table, and then delete them or truncate the staging table (for
truncate you'll need to lock the table so you don't lose rows inserted
between the copy and the truncate).

--
To understand recursion, one must first understand recursion.

