Re: An idea for parallelizing COPY within one backend - Mailing list pgsql-hackers

From A.M.
Subject Re: An idea for parallelizing COPY within one backend
Date
Msg-id 16377CE3-7581-4E94-BB3A-5846440D09CC@themactionfaction.com
In response to Re: An idea for parallelizing COPY within one backend  ("Florian G. Pflug" <fgp@phlo.org>)
Responses Re: An idea for parallelizing COPY within one backend  (Alvaro Herrera <alvherre@commandprompt.com>)
Re: An idea for parallelizing COPY within one backend  ("Heikki Linnakangas" <heikki@enterprisedb.com>)
List pgsql-hackers
On Feb 27, 2008, at 9:11 AM, Florian G. Pflug wrote:

> Dimitri Fontaine wrote:
>> Of course, the backends still have to parse the input given by
>> pgloader, which only pre-processes data. I'm not sure having the
>> client prepare the data some more (binary format or whatever) is a
>> wise idea, as you mentioned and wrt Tom's follow-up. But maybe I'm
>> all wrong, so I'm all ears!
>
> As far as I understand, pgloader starts N threads or processes that
> open up N individual connections to the server. In that case, moving
> the text->binary conversion from the backend into the loader won't
> give any additional performance, I'd say.
>
> The reason that I'd love some within-one-backend solution is that
> it'd allow you to utilize more than one CPU for a restore within a
> *single* transaction. This is something that a client-side solution
> won't be able to deliver, unless major changes to the architecture
> of postgres happen first...

It seems like multiple backends should be able to take advantage of
2PC (two-phase commit) for transaction safety.
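
A minimal sketch of that idea, assuming psycopg2 on the client side, input
pre-split into two files, and max_prepared_transactions set above zero on the
server (file names and the target table are made up for illustration): each
connection COPYs its chunk, every transaction is PREPAREd, and they are only
committed once all of them have prepared successfully.

    import psycopg2

    DSN = "dbname=test"                    # assumed connection string
    CHUNKS = ["chunk0.csv", "chunk1.csv"]  # hypothetical pre-split input files

    conns = []
    try:
        # Phase 1: each connection COPYs its chunk and prepares the transaction.
        for i, path in enumerate(CHUNKS):
            conn = psycopg2.connect(DSN)
            conns.append(conn)
            conn.tpc_begin(conn.xid(42, "parallel-copy", str(i)))
            with open(path) as f, conn.cursor() as cur:
                cur.copy_expert("COPY items FROM STDIN WITH CSV", f)
            conn.tpc_prepare()             # PREPARE TRANSACTION on the server
        # Phase 2: only when every chunk is prepared, commit them all.
        for conn in conns:
            conn.tpc_commit()              # COMMIT PREPARED
    except Exception:
        for conn in conns:
            try:
                conn.tpc_rollback()        # works before or after prepare
            except psycopg2.Error:
                pass
        raise
    finally:
        for conn in conns:
            conn.close()

Of course, this still isn't one transaction: each connection runs with its own
snapshot, which is exactly the limitation the within-one-backend approach
would avoid.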

Cheers,
M

