On Wed, Jan 16, 2013 at 11:56:21PM -0300, Claudio Freire wrote:
> On Wed, Jan 16, 2013 at 11:44 PM, Bruce Momjian <bruce@momjian.us> wrote:
> > On Wed, Jan 16, 2013 at 05:04:05PM -0800, Jeff Janes wrote:
> >> On Tuesday, January 15, 2013, Stephen Frost wrote:
> >>
> >> * Gavin Flower (GavinFlower@archidevsys.co.nz) wrote:
> >> > How about being aware of multiple spindles - so if the requested
> >> > data covers multiple spindles, then data could be extracted in
> >> > parallel. This may, or may not, involve multiple I/O channels?
> >>
> >> Yes, this should dovetail with partitioning and tablespaces to pick up
> >> on exactly that.
> >>
> >>
> >> I'd rather not have the benefits of parallelism be tied to partitioning if we
> >> can help it. Hopefully implementing parallelism in core would result in
> >> something more transparent than that.
> >
> > We will need a way to know we are not saturating the I/O channel with
> > random I/O that could have been sequential if it was single-threaded.
> > Tablespaces give us that info; not sure what else does.
>
> I do also think tablespaces are a safe bet. But it wouldn't help for
> parallelizing sorts or other operations with tempfiles (tempfiles
> reside on the same tablespace), or even over a single table (same
We can already round-robin temp file placement if multiple entries are listed in temp_tablespaces.
> tablespace again). And when the query is CPU-bound, it could be
> parallelized by simply making a multithreaded memory sort. Well, not
> so simply, but I do think it's an important building block.
Yes, and detecting when to use these parallel features will be hard.
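For reference, the temp-file spreading mentioned above is driven by the temp_tablespaces setting; a minimal sketch (tablespace names here are hypothetical, and the tablespaces must already exist):

```sql
-- Spread temporary files/objects across two tablespaces.
-- Within a transaction, successively created temp objects are
-- placed in successive tablespaces from this list.
SET temp_tablespaces = 'temp_ts1, temp_ts2';
```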
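As a rough illustration of the "multithreaded memory sort" building block Claudio describes (not PostgreSQL code; just a sketch of the chunk-sort-then-merge structure, here using Python worker processes and a k-way merge):

```python
import heapq
from concurrent.futures import ProcessPoolExecutor

def sort_chunk(chunk):
    # Each worker sorts its chunk independently, in memory.
    return sorted(chunk)

def parallel_sort(data, workers=4):
    # Split the input into roughly equal chunks, sort each chunk in
    # a separate worker, then k-way merge the sorted runs.
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sort_chunk, chunks))
    return list(heapq.merge(*runs))

if __name__ == "__main__":
    import random
    xs = [random.randint(0, 1000) for _ in range(10000)]
    assert parallel_sort(xs) == sorted(xs)
```

The merge step is single-threaded here, which is exactly the kind of serial bottleneck that makes "not so simply" apt.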
-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +