Re: Merge algorithms for large numbers of "tapes" - Mailing list pgsql-hackers

From Greg Stark
Subject Re: Merge algorithms for large numbers of "tapes"
Date
Msg-id 87fylsmqy8.fsf@stark.xeocode.com
In response to Re: Merge algorithms for large numbers of "tapes"  ("Luke Lonergan" <llonergan@greenplum.com>)
Responses Re: Merge algorithms for large numbers of "tapes"  ("Jim C. Nasby" <jnasby@pervasive.com>)
Re: Merge algorithms for large numbers of "tapes"  (Florian Weimer <fw@deneb.enyo.de>)
List pgsql-hackers
"Luke Lonergan" <llonergan@greenplum.com> writes:

> > I am pretty sure from this thread that PostgreSQL is not doing #1, and I
> > have no idea if it is doing #2.
> 
> Yep.  Even Knuth says that the tape goo is only interesting from a
> historical perspective and may not be relevant in an era of disk drives.

As the size of the data grows larger, the behaviour of hard drives looks more
and more like tapes. The biggest factor controlling the speed of i/o
operations is how many seeks are required to complete them. Effectively,
"rewinds" are still the problem; it's just that the cost of a rewind becomes
constant regardless of how long the "tape" is.

That's one thing that gives me pause about the current approach of using more
tapes. Ideally the user would create a temporary work space on each spindle
and the database would arrange to use no more than that number of tapes; then
each merge operation would involve only sequential access for both reads and
writes.
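To make the point concrete, here is a minimal sketch (not PostgreSQL's actual
tuplesort code) of a k-way merge. Note that each run is read strictly
front-to-back, so if each run sits on its own spindle, every read is
sequential and only the merge heap decides which "tape" supplies the next
element:

```python
import heapq

def kway_merge(runs):
    """Merge k sorted runs ("tapes") into one sorted stream.

    Each run is consumed strictly front-to-back; the heap holds at
    most one pending element per run, so merging k runs needs only
    k sequential read positions.
    """
    iters = [iter(r) for r in runs]
    heap = []  # (value, run_index) pairs, one per non-exhausted run
    for i, it in enumerate(iters):
        first = next(it, None)
        if first is not None:
            heapq.heappush(heap, (first, i))
    out = []
    while heap:
        value, i = heapq.heappop(heap)
        out.append(value)
        # Refill from the same run we just consumed from.
        nxt = next(iters[i], None)
        if nxt is not None:
            heapq.heappush(heap, (nxt, i))
    return out

print(kway_merge([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))
```

With more runs than spindles, two runs share a disk and the head must seek
back and forth between them on every refill, which is exactly the "rewind"
cost discussed above.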

-- 
greg


