Thread: Parallel postgresql

Parallel postgresql

From: Martin Rusoff
I was just contemplating how to make postgres parallel (for DSS 
applications)... Has anyone done work on this? It looks to me like there 
are a couple of obvious places to add parallel operation:

Stage 1) I/O, perhaps through MPI-IO; this would improve table scanning 
and load/unload operations. One (or more) PostgreSQL servers would use 
MPI-IO/ROMIO to access a parallel file system like PVFS or GPFS (IBM).
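
As a rough illustration of what the Stage 1 scan could look like (the 
file name "table.heap" and the stripe size are invented for the 
example, not how the storage manager is wired up today):

/* A minimal MPI-IO sketch: each rank scans a disjoint stripe of a
 * (hypothetical) heap file on a parallel file system such as PVFS or
 * GPFS.  Compile with mpicc; run with mpirun. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define STRIPE (8 * 1024 * 1024)        /* 8 MB per rank per pass */

int main(int argc, char **argv)
{
    int rank, nranks;
    long long total = 0;
    MPI_File fh;
    MPI_Offset fsize, off;
    char *buf = malloc(STRIPE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* "table.heap" stands in for a relation's data file. */
    MPI_File_open(MPI_COMM_WORLD, "table.heap",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_get_size(fh, &fsize);

    /* Rank r reads stripes r, r+n, r+2n, ... so the ranks cover the
     * whole file without overlapping. */
    for (off = (MPI_Offset) rank * STRIPE; off < fsize;
         off += (MPI_Offset) nranks * STRIPE)
    {
        MPI_Status st;
        int nbytes;

        MPI_File_read_at(fh, off, buf, STRIPE, MPI_BYTE, &st);
        MPI_Get_count(&st, MPI_BYTE, &nbytes);
        total += nbytes;                /* scan tuples in buf here */
    }

    printf("rank %d scanned %lld bytes\n", rank, total);
    MPI_File_close(&fh);
    MPI_Finalize();
    free(buf);
    return 0;
}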

Stage 2) Parallel PostgreSQL servers, with the postmaster spawning off 
the server on a different node (possibly borrowing some code from GNU 
Queue) and doing any buffer twiddling with RPC for that connection. The 
client connection would still go through a proxy on the postmaster node 
(kind of like MOSIX)?

It might be more efficient to also use DSM (distributed shared memory) 
for the disk buffers and tables, but that would be more complicated, I 
think...

To handle the case where data will be updated, any query doing update or 
insert type actions would be restricted to run on the node with the 
postmaster. 

I don't see immediately how to make the postmasters parallel.

Thoughts? Anyone?

I was also contemplating the possibility of using some of the techniques 
of Monet (see the monet project on SourceForge), which became open 
source relatively recently. It makes heavy use of decomposed storage 
(each attribute stored individually), which can be a huge win for some 
kinds of queries because it dramatically reduces I/O, cache misses and 
other time-intensive activities. I have not given that as much thought 
yet (i.e. I haven't found the right places in the architecture for it 
yet). It might be possible to snag the plan and use that to drive Monet 
itself (less coding?) instead of trying to build it into the executor 
code directly.
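
As a toy illustration of why decomposed storage wins on 
single-attribute scans (the table layout and sizes below are invented): 
summing one int column reads 4 bytes per tuple from a dense array 
instead of pulling each 64-byte row through the cache.

/* Toy comparison of row-wise ("n-ary") vs. decomposed storage for the
 * same table.  Summing one column over the decomposed form touches
 * 4 bytes per tuple instead of the whole 64-byte row, so far fewer
 * pages and cache lines have to move. */
#include <stdio.h>
#include <stdint.h>

#define NTUPLES 1000000

/* Row-wise storage: scanning one column drags the rest along. */
struct row {
    int32_t id;
    int32_t amount;
    char    payload[56];    /* other attributes, padding the row to 64 B */
};
static struct row rows[NTUPLES];

/* Decomposed storage: each attribute is its own dense array. */
static int32_t col_id[NTUPLES];
static int32_t col_amount[NTUPLES];

static int64_t sum_rowwise(void)
{
    int64_t s = 0;
    for (int i = 0; i < NTUPLES; i++)
        s += rows[i].amount;    /* pulls 64 B per tuple through the cache */
    return s;
}

static int64_t sum_decomposed(void)
{
    int64_t s = 0;
    for (int i = 0; i < NTUPLES; i++)
        s += col_amount[i];     /* reads 4 B per tuple, fully sequential */
    return s;
}

int main(void)
{
    printf("%lld %lld\n",
           (long long) sum_rowwise(), (long long) sum_decomposed());
    return 0;
}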


Re: Parallel postgresql

From: Bruce Momjian
Martin Rusoff wrote:
> I was just contemplating how to make postgres parallel (for DSS 
> applications)... Has anyone done work on this? It looks to me like there 
> are a couple of obvious places to add parallel operation:
> 
> Stage 1) I/O, perhaps through MPI-IO; this would improve table scanning 
> and load/unload operations. One (or more) PostgreSQL servers would use 
> MPI-IO/ROMIO to access a parallel file system like PVFS or GPFS (IBM).
> 
> Stage 2) Parallel PostgreSQL servers, with the postmaster spawning off 
> the server on a different node (possibly borrowing some code from GNU 
> Queue) and doing any buffer twiddling with RPC for that connection. The 
> client connection would still go through a proxy on the postmaster node 
> (kind of like MOSIX)?

One idea would be to throw parts of the executor (like a table sort) to
different machines or to different processors on the same machine,
perhaps via dblink.  You could use threads to send several requests and
wait for their results.

Threading the entire backend would be hard, but we could thread some
parts of it by having slave backends doing some of the work in parallel.
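
A rough sketch of the coordinator side of that idea, using libpq's 
existing asynchronous calls rather than threads; the node names and the 
hash-partitioned queries are placeholders:

/* Fan one sort/scan out to two backends and wait for both, using
 * libpq's asynchronous API.  Build with cc ... -lpq. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    const char *conninfo[2] = { "host=node1 dbname=dss",
                                "host=node2 dbname=dss" };
    const char *query[2] = {
        "SELECT * FROM big WHERE hashval % 2 = 0 ORDER BY k",
        "SELECT * FROM big WHERE hashval % 2 = 1 ORDER BY k"
    };
    PGconn *conn[2];
    int i;

    for (i = 0; i < 2; i++)
    {
        conn[i] = PQconnectdb(conninfo[i]);
        if (PQstatus(conn[i]) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn[i]));
            return 1;
        }
        PQsendQuery(conn[i], query[i]);     /* returns immediately */
    }

    /* Both nodes now work in parallel; collect results in turn.  A
     * real executor node would merge the two sorted streams here. */
    for (i = 0; i < 2; i++)
    {
        PGresult *res;
        while ((res = PQgetResult(conn[i])) != NULL)
        {
            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                printf("node %d: %d rows\n", i, PQntuples(res));
            PQclear(res);
        }
        PQfinish(conn[i]);
    }
    return 0;
}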

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Parallel postgresql

From: Bruce Momjian
Hans-Jürgen Schönig wrote:
> >>Stage 2) Parallel PostgreSQL servers, with the postmaster spawning off 
> >>the server on a different node (possibly borrowing some code from GNU 
> >>Queue) and doing any buffer twiddling with RPC for that connection. The 
> >>client connection would still go through a proxy on the postmaster node 
> >>(kind of like MOSIX)?
> > 
> > 
> > One idea would be to throw parts of the executor (like a table sort) to
> > different machines or to different processors on the same machine,
> > perhaps via dblink.  You could use threads to send several requests and
> > wait for their results.
> > 
> > Threading the entire backend would be hard, but we could thread some
> > parts of it by having slave backends doing some of the work in parallel.
> 
> 
> 
> This would be nice - especially for huge queries needed in warehouses.
> Maybe it could even make sense to do things in parallel if there is just 
> one machine (e.g. computing a function while a sort process is waiting 
> for I/O or so).
> 
> Which operations can run in parallel? What do you think?
> I guess implementing something like that means 20 years more work on the 
> planner ...

My guess is that we would have to have the user tell us which things
they want in parallel somehow.  Of course, the child backend has to
parse/plan/execute the query, and pass the data up to the parent, so you
have to pick things where this overhead is acceptable.



Re: Parallel postgresql

From: Hans-Jürgen Schönig
Bruce Momjian wrote:
> Martin Rusoff wrote:
> 
>>I was just contemplating how to make postgres parallel (for DSS 
>>applications)... Has anyone done work on this? It looks to me like there 
>>are a couple of obvious places to add parallel operation:
>>
>>Stage 1) I/O, perhaps through MPI-IO; this would improve table scanning 
>>and load/unload operations. One (or more) PostgreSQL servers would use 
>>MPI-IO/ROMIO to access a parallel file system like PVFS or GPFS (IBM).
>>
>>Stage 2) Parallel PostgreSQL servers, with the postmaster spawning off 
>>the server on a different node (possibly borrowing some code from GNU 
>>Queue) and doing any buffer twiddling with RPC for that connection. The 
>>client connection would still go through a proxy on the postmaster node 
>>(kind of like MOSIX)?
> 
> 
> One idea would be to throw parts of the executor (like a table sort) to
> different machines or to different processors on the same machine,
> perhaps via dblink.  You could use threads to send several requests and
> wait for their results.
> 
> Threading the entire backend would be hard, but we could thread some
> parts of it by having slave backends doing some of the work in parallel.



This would be nice - especially for huge queries needed in warehouses.
Maybe it could even make sense to do things in parallel if there is just 
one machine (e.g. computing a function while a sort process is waiting 
for I/O or so).

Which operations can run in parallel? What do you think?
I guess implementing something like that means 20 years more work on the 
planner ...
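
The single-machine case is really just overlapping CPU with I/O, which 
already works with two threads; a toy sketch (the file and the 
"expensive function" are stand-ins), where the wall time is roughly 
max(io, cpu) instead of their sum:

/* One thread blocks on I/O while another evaluates an expensive
 * function.  Build with cc ... -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

static void *do_io(void *arg)
{
    char buf[8192];
    int fd = open("sort_run.tmp", O_RDONLY);    /* stand-in sort run */

    (void) arg;
    if (fd >= 0)
    {
        while (read(fd, buf, sizeof buf) > 0)
            ;                           /* a backend would merge here */
        close(fd);
    }
    return NULL;
}

static void *do_cpu(void *arg)
{
    double s = 0;
    long i;

    /* Stand-in for an expensive function in the target list. */
    for (i = 1; i < 100000000L; i++)
        s += 1.0 / i;
    *(double *) arg = s;
    return NULL;
}

int main(void)
{
    pthread_t io, cpu;
    double result = 0;

    pthread_create(&io, NULL, do_io, NULL);
    pthread_create(&cpu, NULL, do_cpu, &result);
    pthread_join(io, NULL);
    pthread_join(cpu, NULL);
    printf("function result: %f\n", result);
    return 0;
}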

By the way: NCR has quite a nice solution for problems like that. 
Teradata has been designed to run everything on multiple nodes (they 
call them AMPs).
Teradata has been designed for A LOT OF data and reporting purposes.
There are just three problems:
- not Open Source
- ~$70k / node
- runs on Windows and NCR's UNIX implementation.

Is anybody familiar with Teradata?
Hans

-- 
Cybertec Geschwinde u Schoenig
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/2952/30706 or +43/660/816 40 77
www.cybertec.at, www.postgresql.at, kernel.cybertec.at




Re: Parallel postgresql

From: Bruce Momjian
Hans-Jürgen Schönig wrote:
> Bruce Momjian wrote:
> > Martin Rusoff wrote:
> > 
> >>I was just contemplating how to make postgres parallel (for DSS 
> >>applications)... Has anyone done work on this? It looks to me like there 
> >>are a couple of obvious places to add parallel operation:
> >>
> >>Stage 1) I/O, perhaps through MPI-IO; this would improve table scanning 
> >>and load/unload operations. One (or more) PostgreSQL servers would use 
> >>MPI-IO/ROMIO to access a parallel file system like PVFS or GPFS (IBM).
> >>
> >>Stage 2) Parallel PostgreSQL servers, with the postmaster spawning off 
> >>the server on a different node (possibly borrowing some code from GNU 
> >>Queue) and doing any buffer twiddling with RPC for that connection. The 
> >>client connection would still go through a proxy on the postmaster node 
> >>(kind of like MOSIX)?
> > 
> > 
> > One idea would be to throw parts of the executor (like a table sort) to
> > different machines or to different processors on the same machine,
> > perhaps via dblink.  You could use threads to send several requests and
> > wait for their results.
> > 
> > Threading the entire backend would be hard, but we could thread some
> > parts of it by having slave backends doing some of the work in parallel.
> 
> 
> 
> This would be nice - especially for huge queries needed in warehouses.
> Maybe it could even make sense to do things in parallel if there is just 
> one machine (e.g. computing a function while a sort process is waiting 
> for I/O or so).
> 
> Which operations can run in parallel? What do you think?
> I guess implementing something like that means 20 years more work on the 
> planner ...

I would think a very expensive function call could already be done in
this way, though you can't do SQL in the function because the visibility
rules and commit/abort handling aren't passed down to the child; that
would severely limit what could be done in a child. The only logical
thing would be some function that calls an external program to send
email or something. We could implement something to pass the parent pid
down to the child, and the child could use that for visibility rules,
and maybe commit/abort if we used the parent xid to stamp any rows
modified by the child. Of course, anything I/O bound wouldn't benefit
from this.
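
A sketch of that narrow safe case: fork, have the child exec only an 
external command (no SQL, so no visibility or commit/abort state is 
needed), and let the parent keep executing; the mail command is a 
placeholder:

/* The child only runs an external program; it never touches shared
 * buffers or the transaction machinery, so nothing has to be handed
 * down from the parent. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0)
    {
        /* Child: exec the (placeholder) external mail command. */
        execl("/bin/sh", "sh", "-c",
              "echo 'threshold exceeded' | mail -s alert dba@example.com",
              (char *) NULL);
        _exit(127);                     /* exec itself failed */
    }
    else if (pid > 0)
    {
        /* Parent: keep executing the rest of the query plan here,
         * then reap the child. */
        int status;

        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    else
        perror("fork");

    return 0;
}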
