Re: Multi CPU Queries - Feedback and/or suggestions wanted! - Mailing list pgsql-hackers

From Chuck McDevitt
Subject Re: Multi CPU Queries - Feedback and/or suggestions wanted!
Date
Msg-id EB48EBF3B239E948AC1E3F3780CF8F88044E92C8@MI8NYCMAIL02.Mi8.com
In response to Re: Multi CPU Queries - Feedback and/or suggestions wanted!  ("Jeffrey Baker" <jwbaker@gmail.com>)
Responses Re: Multi CPU Queries - Feedback and/or suggestions wanted!
List pgsql-hackers

There is a problem trying to make Postgres do these things in Parallel.

 

The backend code isn’t thread-safe, so a multi-threaded implementation would require quite a bit of work.

 

Using multiple processes has its own problems: the whole way locking works equates one process with one transaction (the proc table has one entry per process). Processes cooperating on the same transaction would conflict on each other's locks and deadlock against themselves, among many other problems.

 

It’s all a good idea, but the work is probably far more than you expect.

 

Async I/O might be easier if you used pthreads, which are mostly portable, though not to all platforms. (Yes, they do work on Windows.)
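
Just to illustrate the shape of it, here is a minimal sketch (not PostgreSQL code; the block size, the bare file descriptor, and all of the names are invented for the example) in which a background pthread issues the read for the next block while the caller keeps working:

/* Illustration only -- not PostgreSQL code.  A background pthread
 * issues the read for the next block while the caller keeps working. */
#define _XOPEN_SOURCE 700
#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>

#define BLCKSZ 8192                      /* assumed block size */

struct prefetch_req {
    int      fd;                         /* file to read from */
    off_t    offset;                     /* where the next block starts */
    char     buf[BLCKSZ];                /* filled by the worker thread */
    ssize_t  nread;                      /* result of pread() */
};

static void *prefetch_worker(void *arg)
{
    struct prefetch_req *req = arg;

    req->nread = pread(req->fd, req->buf, BLCKSZ, req->offset);
    return NULL;
}

int main(int argc, char **argv)
{
    struct prefetch_req req;
    pthread_t           tid;

    if (argc < 2)
    {
        fprintf(stderr, "usage: %s datafile\n", argv[0]);
        return 1;
    }
    req.fd = open(argv[1], O_RDONLY);
    if (req.fd < 0)
    {
        perror("open");
        return 1;
    }
    req.offset = 0;

    /* Kick off the read of the first block in the background... */
    pthread_create(&tid, NULL, prefetch_worker, &req);

    /* ...do other useful work here while the I/O is in flight... */

    /* ...and wait for the block only when it is actually needed. */
    pthread_join(tid, NULL);
    printf("read %zd bytes at offset %lld\n",
           req.nread, (long long) req.offset);

    close(req.fd);
    return 0;
}

Something in that spirit builds with cc -pthread; the hard part is not the threading itself but wiring it into the buffer manager and the executor.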

 

From: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Jeffrey Baker
Sent: 2008-10-20 22:25
To: Julius Stroffek
Cc: pgsql-hackers@postgresql.org; Dano Vojtek
Subject: Re: [HACKERS] Multi CPU Queries - Feedback and/or suggestions wanted!

 

On Mon, Oct 20, 2008 at 12:05 PM, Julius Stroffek <Julius.Stroffek@sun.com> wrote:

Topics that seem to be of interest, most of which were already
discussed at the developers meeting in Ottawa, are
1.) parallel sorts
2.) parallel query execution
3.) asynchronous I/O
4.) parallel COPY
5.) parallel pg_dump
6.) using threads for parallel processing

[...]

2.)
Different subtrees (or nodes) of the plan could be executed in parallel
on different CPUs, and the results of these subtrees could be requested
either synchronously or asynchronously.
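
As a toy illustration of that producer/consumer shape (plain pthreads, every name invented for the example, nothing from the real executor): one thread stands in for a plan subtree and pushes rows into a shared queue, and the parent can either block for the next row or poll for one without waiting.

/* Toy producer/consumer sketch (not PostgreSQL executor code): a
 * pthread stands in for a plan subtree pushing rows into a queue;
 * the parent fetches results synchronously or asynchronously. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_SIZE 16

struct tuple_queue {
    int             rows[QUEUE_SIZE];    /* sized for the whole result */
    int             head, tail, count;
    bool            done;                /* subtree finished producing */
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
};

static void queue_push(struct tuple_queue *q, int row)
{
    pthread_mutex_lock(&q->lock);
    q->rows[q->tail] = row;
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    q->count++;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Synchronous fetch: block until a row arrives or the stream ends. */
static bool queue_pop_sync(struct tuple_queue *q, int *row)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0 && !q->done)
        pthread_cond_wait(&q->nonempty, &q->lock);
    if (q->count == 0)                   /* done and fully drained */
    {
        pthread_mutex_unlock(&q->lock);
        return false;
    }
    *row = q->rows[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return true;
}

/* Asynchronous fetch: never blocks; true only if a row was ready. */
static bool queue_pop_async(struct tuple_queue *q, int *row)
{
    bool got = false;

    pthread_mutex_lock(&q->lock);
    if (q->count > 0)
    {
        *row = q->rows[q->head];
        q->head = (q->head + 1) % QUEUE_SIZE;
        q->count--;
        got = true;
    }
    pthread_mutex_unlock(&q->lock);
    return got;
}

/* The "subtree": produces ten rows, then marks itself done. */
static void *subtree_worker(void *arg)
{
    struct tuple_queue *q = arg;

    for (int i = 0; i < 10; i++)
        queue_push(q, i * i);

    pthread_mutex_lock(&q->lock);
    q->done = true;
    pthread_cond_broadcast(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
    return NULL;
}

int main(void)
{
    struct tuple_queue q = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .nonempty = PTHREAD_COND_INITIALIZER,
    };
    pthread_t tid;
    int       row;

    pthread_create(&tid, NULL, subtree_worker, &q);

    /* Poll once without blocking, then drain synchronously. */
    if (queue_pop_async(&q, &row))
        printf("polled row %d\n", row);
    while (queue_pop_sync(&q, &row))
        printf("got row %d\n", row);

    pthread_join(tid, NULL);
    return 0;
}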


I don't see why multiple CPUs can't work on the same node of a plan.  For instance, consider a node involving a scan with an expensive condition, like UTF-8 string length.  If you have four CPUs you can bring to bear, each CPU could take every fourth page, computing the expensive condition for each tuple in that page.  The results of the scan can be retired asynchronously to the next node above.
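
A rough, self-contained sketch of that striping scheme (again just pthreads over an in-memory array standing in for heap pages; the names and the condition are made up): worker k takes pages k, k+4, k+8, and so on, and evaluates the expensive condition for every tuple on its pages.

/* Hypothetical page-striping sketch, not PostgreSQL code: four      */
/* workers each take every fourth "page" and evaluate an expensive   */
/* per-tuple condition; results are collected per worker.            */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS        4
#define NPAGES          16
#define TUPLES_PER_PAGE 100

static int pages[NPAGES][TUPLES_PER_PAGE];   /* stand-in for heap pages */

/* Stand-in for an expensive condition such as a UTF-8 length check. */
static int expensive_condition(int tuple)
{
    return (tuple % 7) == 0;
}

struct worker_arg {
    int  id;        /* which stripe this worker owns */
    long matches;   /* tuples satisfying the condition */
};

static void *scan_worker(void *varg)
{
    struct worker_arg *arg = varg;

    /* Worker k scans pages k, k + NWORKERS, k + 2*NWORKERS, ... */
    for (int p = arg->id; p < NPAGES; p += NWORKERS)
        for (int t = 0; t < TUPLES_PER_PAGE; t++)
            if (expensive_condition(pages[p][t]))
                arg->matches++;
    return NULL;
}

int main(void)
{
    pthread_t         tids[NWORKERS];
    struct worker_arg args[NWORKERS];
    long              total = 0;

    /* Fill the fake pages with some tuples. */
    for (int p = 0; p < NPAGES; p++)
        for (int t = 0; t < TUPLES_PER_PAGE; t++)
            pages[p][t] = p * TUPLES_PER_PAGE + t;

    for (int i = 0; i < NWORKERS; i++)
    {
        args[i].id = i;
        args[i].matches = 0;
        pthread_create(&tids[i], NULL, scan_worker, &args[i]);
    }
    for (int i = 0; i < NWORKERS; i++)
    {
        pthread_join(tids[i], NULL);
        total += args[i].matches;
    }
    printf("matching tuples: %ld\n", total);
    return 0;
}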

-jwb
