Re: Merge algorithms for large numbers of "tapes" - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: Merge algorithms for large numbers of "tapes"
Date
Msg-id 1141893421.3810.5.camel@localhost.localdomain
In response to Re: Merge algorithms for large numbers of "tapes"  ("Jim C. Nasby" <jnasby@pervasive.com>)
List pgsql-hackers
On one fine day, Wed, 2006-03-08 at 20:08, Jim C. Nasby wrote:

> But it will take a whole lot of those rewinds to equal the amount of
> time required by an additional pass through the data. 

I guess that missing a sector read also implies a "rewind": if you
don't consume the data read from a "tape" fast enough, you have to
wait a whole disk revolution (roughly comparable to seek time on
modern disks) before you get the next chunk of data.
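
Just to put rough numbers on that, a back-of-the-envelope sketch (the
spindle speeds and the ~8-9 ms seek figure below are assumed typical
values, not measurements from any particular drive):

/* Rough rotational-latency arithmetic; the spindle speeds and the
 * ~8-9 ms seek figure are assumed typical values, not measurements. */
#include <stdio.h>

int
main(void)
{
	const double rpm[] = {5400, 7200, 10000, 15000};
	int			i;

	for (i = 0; i < 4; i++)
	{
		double		ms_per_rev = 60.0 * 1000.0 / rpm[i];

		/* Missing a block costs up to one full revolution. */
		printf("%5.0f rpm: one revolution = %4.1f ms\n",
			   rpm[i], ms_per_rev);
	}
	/* Compare with typical average seek times of ~8-9 ms. */
	return 0;
}

At 7200 rpm one revolution is ~8.3 ms, i.e. about the same as an
average seek, which is why a missed read looks so much like a "rewind".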

> I'll venture a
> guess that as long as you've got enough memory to still read chunks back
> in 8k blocks  that it won't be possible for a multi-pass sort to
> out-perform a one-pass sort. Especially if you also had the ability to
> do pre-fetching (not something to fuss with now, but certainly a
> possibility in the future).
>  
> In any case, what we really need is at least good models backed by good
> drive performance data.

And filesystem performance data as well, since Postgres uses the OS's
native filesystems.
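
To make the "good models" point concrete, here is a minimal sketch of
the kind of model I have in mind; every constant in it (data size, run
count, fan-in, transfer rate) is a made-up illustration figure, not
measured drive or filesystem data:

/* Minimal sketch of a merge cost model: how many merge passes a given
 * fan-in needs, and how much data each pass moves.  All parameter
 * values are made-up illustration figures, not measurements. */
#include <math.h>
#include <stdio.h>

int
main(void)
{
	double		data_mb = 100.0 * 1024;		/* 100 GB to sort */
	double		runs = 400.0;	/* initial runs after run formation */
	double		fanin = 100.0;	/* "tapes" merged per pass */
	double		xfer_mb_s = 60.0;	/* assumed sequential transfer rate */

	/* Each merge pass reads and writes the whole data set once
	 * (ignoring that the final pass can stream its output). */
	double		passes = ceil(log(runs) / log(fanin));
	double		mb_moved = data_mb * passes * 2.0;

	printf("passes=%.0f, data moved=%.0f MB, sequential I/O alone=%.0f s\n",
		   passes, mb_moved, mb_moved / xfer_mb_s);

	/* A real model would add a per-chunk seek (or missed-rotation)
	 * cost, which depends on how big the read-back chunks are. */
	return 0;
}

The seek/rotation term is exactly where the drive and filesystem
numbers would have to come from.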

--------------
Hannu


