Thread: Page Size in Future Releases
Will an increase in the size of a data page increase the performance of a
database with large records?

I have records of about 881 bytes + 40 bytes (header) = 921 bytes.

An 8K page size / 921 bytes per record is ONLY 8 records per page...

Comments are welcome...

k=n^r/ck, SCJP
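[Editor's note: the records-per-page arithmetic above can be checked with a short sketch. The header and line-pointer sizes below are approximate figures for PostgreSQL's on-disk page format, used here only as assumptions; the sketch ignores alignment padding and fillfactor.]

```python
# Rough tuples-per-page estimate for the numbers in this thread.
# Assumptions: a 24-byte page header and a 4-byte line pointer per tuple
# (approximate PostgreSQL on-disk figures); alignment and fillfactor ignored.

PAGE_HEADER = 24   # bytes taken by the page header (assumption)
LINE_POINTER = 4   # bytes per item pointer in the page's line array

def tuples_per_page(page_size: int, tuple_size: int) -> int:
    usable = page_size - PAGE_HEADER
    return usable // (tuple_size + LINE_POINTER)

print(tuples_per_page(8192, 921))    # 8K page, 921-byte record
print(tuples_per_page(32768, 921))   # 32K page, same record
```

With an 8K page this gives the same 8 records per page quoted above; a 32K page fits proportionally more, which is the trade-off the rest of the thread debates.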
On Friday 21 Mar 2003 2:15 am, Kendrick C. Wilson wrote:
> Will an increase in the size of a data page increase performance of a
> database with large records?
>
> I have records about 881 bytes + 40 bytes (header) = 921.
>
> 8k page size / 921 bytes per record is ONLY 8 records...

You can tweak it yourself at compile time in some header file, and that
should work, but that is a point of diminishing returns as far as hackers
are concerned.

One case where I know it would help is getting PostgreSQL to use tons of
shared memory. Right now PostgreSQL cannot use much beyond 250MB(??)
because the number of shared buffers is an int or something. So if you know
your records are large, are often manipulated, and your OS is not so good
at file caching, then increasing the page size might help.

Given how good unices are in general at file and memory handling, I would
say you should not do it unless your average record size is greater than
8K, something like a large genome sequence or so. YMMV...

Shridhar
> > I have records about 881 bytes + 40 bytes (header) = 921.
> >
> > 8k page size / 921 bytes per record is ONLY 8 records...
>
> You can tweak it yourself at compile time in some header file and that
> should work, but that is a point of diminishing returns as far as hackers
> are concerned.

As far as I'm aware, the 8K page size has nothing to do with speed and
everything to do with atomic writes. You can't be guaranteed that the O/S
and hard drive controller will write anything more than 8K as an atomic
block...

Chris
Shridhar,

> One reason I know where it would help is getting PostgreSQL to use tons
> of shared memory. Right now PostgreSQL cannot use much beyond 250MB(??)
> because the number of shared buffers is an int or something. So if you
> know your records are large, are often manipulated, and your OS is not so
> good at file caching, then increasing page size might help.

Um, two fallacies:

1) You can allocate as much shared buffer RAM as you want. The maximum I've
tested personally is 300MB, but I know of no hard limit.

2) However, allocating more shared buffer RAM ... in fact anything beyond
about 40MB ... has never been shown by anyone on this list to be helpful
for any size of database, and sometimes the contrary.

--
Josh Berkus
Aglio Database Solutions
San Francisco
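[Editor's note: the buffer-memory figures in this exchange follow from the fact that shared_buffers is a count of block-sized buffers, so memory use is simply count times block size. A minimal sketch, assuming the default 8K block size:]

```python
# Convert a shared_buffers count into megabytes, assuming the default
# 8K block size. This is the arithmetic behind the 300MB / 40MB figures
# discussed above.

BLOCK_SIZE = 8192  # default PostgreSQL page size in bytes

def shared_buffers_mb(n_buffers: int) -> float:
    return n_buffers * BLOCK_SIZE / (1024 * 1024)

print(shared_buffers_mb(38400))  # 38400 buffers at 8K each -> 300.0 MB
print(shared_buffers_mb(5120))   # 5120 buffers at 8K each  -> 40.0 MB
```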
"Kendrick C. Wilson" <kendrick_wilson@hotmail.com> writes:
> Will an increase in the size of a data page increase performance of a
> database with large records?

Probably not; in fact the increased WAL overhead could make it a net
loss. But feel free to try changing BLCKSZ to see how it works for you.

			regards, tom lane
On Saturday, 22 March 2003 01:15, Tom Lane wrote:
> "Kendrick C. Wilson" <kendrick_wilson@hotmail.com> writes:
> > Will an increase in the size of a data page increase performance of a
> > database with large records?
>
> Probably not; in fact the increased WAL overhead could make it a net
> loss. But feel free to try changing BLCKSZ to see how it works for you.

I have several databases with 32KB and 8KB page sizes, and though the
results are not really comparable due to slightly different hardware, I
have the feeling that 8KB buffers work best in most cases. The only
difference I noticed is with large objects, which seem to work slightly
better with larger page sizes.

Regards,
Mario Weilguni
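[Editor's note: for anyone wanting to try Tom Lane's BLCKSZ experiment, a build-configuration sketch follows. In the source releases of this thread's era, BLCKSZ was edited directly in a header before compiling; the `--with-blocksize` configure switch shown here was added in later PostgreSQL releases, so treat this as an outline rather than exact instructions for 7.x.]

```shell
# Build PostgreSQL with a 32K page size instead of the default 8K.
# Note: --with-blocksize exists in later releases; on the versions
# discussed in this thread, edit BLCKSZ in the source header
# (src/include/pg_config.h) before building instead.
# A cluster initialized under one block size cannot be served by a
# binary built with another, so re-run initdb and reload your data.
./configure --with-blocksize=32
make && make install

# After initdb and startup, confirm the block size from psql with:
#   SHOW block_size;
```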