Re: Bulk Insert into PostgreSQL - Mailing list pgsql-hackers

From Peter Geoghegan
Subject Re: Bulk Insert into PostgreSQL
Date
Msg-id CAH2-Wz=-V=-pO9u4jEtgbSH+y8zpKrzigmpxzh3PMhjnudo3Mg@mail.gmail.com
In response to RE: Bulk Insert into PostgreSQL  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
Responses RE: Bulk Insert into PostgreSQL
Re: Bulk Insert into PostgreSQL
List pgsql-hackers
On Sun, Jul 1, 2018 at 5:19 PM, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:
> 400 GB / 15 hours = 7.6 MB/s
>
> That looks too slow.  I experienced a similar slowness.  While one of our
> users tried to INSERT (not COPY) a billion records, they reported that the
> INSERTs slowed down by a factor of about 10 after inserting roughly 500
> million records.  Periodic pstack runs on Linux showed that the backend was
> busy in btree operations.  I didn't pursue the cause due to other work, but
> there might be something to be improved.
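For reference, the throughput figure quoted above can be reproduced with a quick back-of-the-envelope calculation (a minimal sketch; binary units, i.e. 1 GB = 1024 MB, are assumed here):

```python
# Sanity-check the quoted figure: 400 GB loaded in 15 hours.
gigabytes = 400
hours = 15

total_mb = gigabytes * 1024      # total data in MB (binary units assumed)
total_seconds = hours * 3600     # total elapsed time in seconds

throughput_mb_s = total_mb / total_seconds
print(round(throughput_mb_s, 1))  # prints 7.6
```

So roughly 7.6 MB/s, matching the number in the quoted message.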

What kind of data was indexed? Was it a bigserial primary key, or
something else?

--
Peter Geoghegan

