Thread: read block size
Is it possible to tweak the size of a block that postgres tries to read when doing a sequential scan? It looks like it reads in fairly small blocks, and I'd expect a fairly significant boost in I/O performance when doing a large (multi-gig) sequential scan if larger blocks were used.

Mike Stone
Michael Stone wrote:
> Is it possible to tweak the size of a block that postgres tries to read
> when doing a sequential scan? It looks like it reads in fairly small
> blocks, and I'd expect a fairly significant boost in I/O performance
> when doing a large (multi-gig) sequential scan if larger blocks were
> used.
>
> Mike Stone

I believe postgres reads in one database page at a time, which defaults to 8k IIRC. If you want bigger, you could recompile and set the default page size to something else.

There has been discussion about changing the reading/writing code to be able to handle multiple pages at once (using something like vread()), but I don't know that it has been implemented.

Also, this would hurt cases where you can terminate a sequential scan early. And if the OS is doing its job right, it will already do some read-ahead for you.

John =:->
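For what it's worth, vread() isn't a standard call on most systems; the closest POSIX equivalent of "handle multiple pages at once" is the scatter/gather read, readv(). Here's a minimal sketch of the idea, purely illustrative and not the actual backend code (8192 matches postgres's default page size; the 128-page batch is an arbitrary choice):

    /* Illustrative only, not postgres source: fill many 8k page
     * buffers with one syscall instead of one read() per page. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/uio.h>

    #define BLCKSZ 8192   /* postgres's default page size */
    #define NPAGES 128    /* 1MB per syscall; arbitrary */

    int main(int argc, char **argv)
    {
        char *pages[NPAGES];
        struct iovec iov[NPAGES];
        ssize_t n;
        int i, fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        for (i = 0; i < NPAGES; i++) {
            pages[i] = malloc(BLCKSZ);
            if (pages[i] == NULL) return 1;
            iov[i].iov_base = pages[i];
            iov[i].iov_len = BLCKSZ;
        }

        /* one readv() call fills all 128 page buffers (1MB) */
        while ((n = readv(fd, iov, NPAGES)) > 0)
            printf("%zd bytes in one syscall\n", n);

        close(fd);
        return 0;
    }

The point is just that the per-page buffers can stay separate (as they would in a buffer cache) while the kernel still sees one large sequential request.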
On Tue, Jun 28, 2005 at 12:02:55PM -0500, John A Meinel wrote:
>There has been discussion about changing the reading/writing code to be
>able to handle multiple pages at once (using something like vread()),
>but I don't know that it has been implemented.

That sounds promising.

>Also, this would hurt cases where you can terminate a sequential scan
>early.

If you're doing a sequential scan of a 10G file in, say, 1M blocks, I don't think the performance difference of reading a couple of blocks unnecessarily is going to matter.

>And if the OS is doing its job right, it will already do some
>read-ahead for you.

The app should have a much better idea of whether it's doing a sequential scan and won't be confused by concurrent activity. Even if the OS does readahead perfectly, you'll still get a win with larger blocks by cutting down on the syscalls.

Mike Stone
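For the archives: the explicit way for an application to tell the OS "this is a sequential scan" is posix_fadvise() with POSIX_FADV_SEQUENTIAL, which asks the kernel for more aggressive readahead instead of leaving it to guess from access patterns. A minimal sketch, assuming a Linux/POSIX system (the 1MB chunk size matches the example above; none of this is postgres code):

    /* Illustrative sketch: hint sequential access, then scan the
     * file in 1MB chunks so a multi-gig read costs thousands of
     * syscalls instead of hundreds of thousands of 8k read()s. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define CHUNK (1024 * 1024)   /* 1MB per read() */

    int main(int argc, char **argv)
    {
        char *buf = malloc(CHUNK);
        ssize_t n;
        long long total = 0, calls = 0;
        int fd;

        if (argc != 2 || buf == NULL) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* tell the kernel we'll read front to back, so it can
         * schedule readahead without inferring the pattern */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        while ((n = read(fd, buf, CHUNK)) > 0) {
            total += n;
            calls++;
        }
        printf("%lld bytes in %lld read() calls\n", total, calls);

        close(fd);
        free(buf);
        return 0;
    }

Relative to 8k reads, the 1MB read size cuts the syscall count by a factor of 128, which is the win referred to above; the fadvise hint covers the readahead side even under concurrent activity.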