Thread: Re: Benchmark Data requested --- pgloader CE design ideas
Improvements are welcome, but to compete in the industry, loading will need to speed up by a factor of 100.
Note that Bizgres loader already does many of these ideas and it sounds like pgloader does too.
- Luke
Message is short because I'm on my Treo.
-----Original Message-----
From: Dimitri Fontaine [mailto:dfontaine@hi-media.com]
Sent: Wednesday, February 06, 2008 12:41 PM Eastern Standard Time
To: pgsql-performance@postgresql.org
Cc: Greg Smith
Subject: Re: [PERFORM] Benchmark Data requested --- pgloader CE design ideas
On Wednesday, February 6, 2008, Greg Smith wrote:
> If I'm loading a TB file, odds are good I can split that into 4 or more
> pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders
> at once, and get way more than 1 disk worth of throughput reading.
pgloader already supports starting at any input file line number and limiting
itself to any number of reads:
-C COUNT, --count=COUNT
number of input lines to process
-F FROMCOUNT, --from=FROMCOUNT
number of input lines to skip
So you could already launch 4 pgloader processes with the same configuration
file but different command-line arguments. If there's interest/demand, it's
easy enough for me to add those parameters as file configuration knobs too.
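
To illustrate, here's a rough shell sketch of the idea (the -c config flag,
the file name, and the chunk arithmetic are my assumptions, not tested
commands; --from/--count are the options shown above):

    #!/bin/sh
    # Split data.csv across 4 concurrent pgloader processes using the
    # --from/--count options. Config path, file name, and the -c flag
    # are illustrative; check your pgloader version's --help output.
    TOTAL=$(wc -l < data.csv)      # total input lines
    CHUNK=$(( (TOTAL + 3) / 4 ))   # lines per worker, rounded up

    for i in 0 1 2 3; do
        pgloader -c pgloader.conf --from=$(( i * CHUNK )) --count=$CHUNK &
    done
    wait    # block until all four loaders have finished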
Still, you have to pay for client-to-server communication instead of having
the backend read the file locally, but now maybe we begin to compete?
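
For reference, the two paths being compared look like this from the shell
(the database, table, and file names are invented for the example):

    # Server-side COPY: the backend reads the file itself, so the file
    # must be readable by the postgres server process -- the fast path.
    psql -d mydb -c "COPY facts FROM '/data/facts.csv' WITH CSV"

    # Client-side \copy: psql streams the file over the client/server
    # connection -- the same protocol cost a pgloader client pays.
    psql -d mydb -c "\copy facts from '/data/facts.csv' with csv"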
Regards,
--
dim
On Wednesday 06 February 2008 18:49:56, Luke Lonergan wrote:
> Improvements are welcome, but to compete in the industry, loading will need
> to speed up by a factor of 100.

Oh, I meant to compete with the internal COPY command instead of the \copy
one, not with the competition. AIUI, competing with the competition will need
some PostgreSQL internal improvements, which I'll let the -hackers do :)

> Note that Bizgres loader already does many of these ideas and it sounds
> like pgloader does too.

We're talking about how to improve pgloader :)
--
dim