Thread: 8K block limit

8K block limit

From
"Ken Mort"
Date:
I already asked this on the other lists, so I'm asking here.

I need to store some polygons that are larger than 8K.
I saw some discussion in the hackers archives about a solution
to the 8K limit. Was anything done? If so, what do I
need to do to solve my problem?


Regards,
Kenneth R. Mort  <kenmort@mort.port.net>
TreeTop Research
Brooklyn, NY, USA


Re: [HACKERS] 8K block limit

From
Peter T Mount
Date:
On Mon, 15 Feb 1999, Ken Mort wrote:

> I already asked this on the other lists, so I'm asking here.
> 
> I need to store some polygons that are larger than 8K.
> I saw some discussion in the hackers archives about a solution
> to the 8K limit. Was anything done? If so, what do I
> need to do to solve my problem?

There is an option that can be set at compile time to change the block size
from 8K to something like 32K or 64K (not sure which).
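(If I remember correctly, for 6.4.x this is the BLCKSZ define in
src/include/config.h: the default is "#define BLCKSZ 8192", so you would
change that to, say, "#define BLCKSZ 32768", rebuild, and then re-run
initdb, since databases created with one block size cannot be read with
another.)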

Note: changing the block size may have a performance hit, however.

Another way is to break the polygons down into smaller pieces.
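For example (just a sketch, with made-up table and column names), each
large polygon could be stored as several rows:

  create table poly_pieces (
      poly_id  int,     -- which original polygon the piece belongs to
      piece_no int,     -- order of the piece within that polygon
      piece    polygon  -- a fragment small enough to fit in a block
  );

and the application would reassemble the pieces by poly_id/piece_no when
reading them back.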

Peter

--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf



Re: [HACKERS] 8K block limit

From
Tatsuo Ishii
Date:
> On Mon, 15 Feb 1999, Ken Mort wrote:
> 
> > I already asked this on the other lists, so I'm asking here.
> > 
> > I need to store some polygons that are larger than 8K.
> > I saw some discussion in the hackers archives about a solution
> > to the 8K limit. Was anything done? If so, what do I
> > need to do to solve my problem?
> 
> There is an option that can be set at compile time to change the block size
> from 8K to something like 32K or 64K (not sure which).

I think it is 32k (the tuple offset in a block is limited to 15 bits,
and 2^15 = 32768).

> Note: changing the block size may have a performance hit, however.

Why?
---
Tatsuo Ishii


Re: [HACKERS] 8K block limit

From
Peter T Mount
Date:
On Wed, 17 Feb 1999, Tatsuo Ishii wrote:

> > On Mon, 15 Feb 1999, Ken Mort wrote:
> > 
> > > I already asked this on the other lists, so I'm asking here.
> > > 
> > > I need to store some polygons that are larger than 8K.
> > > I saw some discussion in the hackers archives about a solution
> > > to the 8K limit. Was anything done? If so, what do I
> > > need to do to solve my problem?
> > 
> > There is an option that can be set at compile time to change the block size
> > from 8K to something like 32K or 64K (not sure which).
> 
> I think it is 32k (the tuple offset in a block is limited to 15 bits,
> and 2^15 = 32768).
> 
> > Note: changing the block size may have a performance hit, however.
> 
> Why?

I think some file systems are more optimised for 8K blocks. I may be
thinking of the original reason for the 8K limit in the first place, but I
remember there were discussions about this when the block size was altered.

Peter

-- 
Peter Mount, IT Section
petermount@it.maidstone.gov.uk
Anything I write here is my own view, and cannot be taken as being the
official words of Maidstone Borough Council




Re: [HACKERS] 8K block limit

From
Bruce Momjian
Date:
> 
> I think some file systems are more optimised for 8K blocks. I may be
> thinking of the original reason for the 8K limit in the first place, but I
> remember there were discussions about this when the block size was altered.

Yes, most UFS file systems use 8k blocks/2k fragments.  It allows a block
to be written in one I/O operation.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
 


RE: [HACKERS] 8K block limit

From
"Stupor Genius"
Date:
> > I think some file systems are more optimised for 8K blocks. I may be
> > thinking of the original reason for the 8K limit in the first
> > place, but I remember there were discussions about this when the block
> > size was altered.
> 
> Yes, most UFS file systems use 8k blocks/2k fragments.  It allows a block
> to be written in one I/O operation.

The max is 32k because of the aforementioned 15 bits available, but I'd
be a bit cautious of trying it.  When I put this in, the highest I could
get to work on AIX was 16k.  Pushing it up to 32k caused major breakage
in the system internals.  Had to reboot the machine and fsck the file
system.  Some files were linked incorrectly, other files disappeared, etc,
a real mess.

Not sure exactly what it corrupted, but I'd try the 32k limit on a non-
production system first...

Darren


Re: [HACKERS] 8K block limit

From
Tatsuo Ishii
Date:
>> > I think some file systems are more optimised for 8K blocks. I may be
>> > thinking of the original reason for the 8K limit in the first
>> > place, but I remember there were discussions about this when the block
>> > size was altered.
>> 
>> Yes, most UFS file systems use 8k blocks/2k fragments.  It allows a block
>> to be written in one I/O operation.

But modern Unixes do read/write-ahead I/O if the access looks sequential,
don't they? I did some testing on my LinuxPPC box.

0. create table t2(i int,c char(4000));
1. time psql -c "copy t2 from '/tmp/aaa'" test
   (aaa has 5120 records and this will create 20MB table)
2. time psql -c "select count(*) from t2" test
3. total time of the regression test

o result of testing 1
8K: 0.02user 0.04system 3:26.20elapsed
32K: 0.03user 0.06system 0:48.25elapsed
 32K is 4 times faster than 8k!

o result of testing 2
8K: 0.02user 0.04system 6:00.31elapsed
32K: 0.04user 0.02system 1:02.13elapsed
32K is nearly 6 times faster than 8k!

o result of testing 3
8K: 11.46user 9.51system 6:08.24
32K: 11.34user 9.54system 7:35.35
32K is a little bit slower than 8K?

My thoughts:

In my test case the tuple size is relatively large, so with ordinary-sized
tuples we may get different results. And of course different OSes may
behave differently...

Another point is the access method. I only tested sequential scans; I
don't know about index scans.

Additional testing is welcome...
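(For anyone who wants to try this elsewhere: /tmp/aaa is just 5120 rows
for the (int, char(4000)) table above, so something like

  awk 'BEGIN { s = sprintf("%4000s", ""); gsub(/ /, "x", s);
               for (i = 1; i <= 5120; i++) printf "%d\t%s\n", i, s }' > /tmp/aaa

should produce an equivalent ~20MB input file for the copy test.)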

>The max is 32k because of the aforementioned 15 bits available, but I'd
>be a bit cautious of trying it.  When I put this in, the highest I could
>get to work on AIX was 16k.  Pushing it up to 32k caused major breakage
>in the system internals.  Had to reboot the machine and fsck the file
>system.  Some files were linked incorrectly, other files disappeared, etc,
>a real mess.
>
>Not sure exactly what it corrupted, but I'd try the 32k limit on a non-
>production system first...

I did the above on 6.4.2. What version are you using? Or maybe it's a
platform-dependent problem?

BTW, the biggest problem is that there are some hard-coded query length
limits somewhere (for example MAX_MESSAGE_LEN in libpq-int.h). Until these
get fixed, the 32K option is only useful for (possible) performance
boosting.
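(A quick way to find such limits is something like

  grep -n MAX_MESSAGE_LEN src/interfaces/libpq/*.h

and similar greps for other constants derived from the old 8K assumption;
anything like that would have to be raised along with the block size.)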
---
Tatsuo Ishii


Re: [HACKERS] 8K block limit

From
Vadim Mikheev
Date:
Tatsuo Ishii wrote:
> 
> But modern Unixes do read/write-ahead I/O if the access looks sequential,
> don't they? I did some testing on my LinuxPPC box.
> 
> 0. create table t2(i int,c char(4000));
> 1. time psql -c "copy t2 from '/tmp/aaa'" test
>    (aaa has 5120 records and this will create 20MB table)
> 2. time psql -c "select count(*) from t2" test
> 3. total time of the regression test
> 
> o result of testing 1
> 
>  8K: 0.02user 0.04system 3:26.20elapsed
> 32K: 0.03user 0.06system 0:48.25elapsed
> 
>   32K is 4 times faster than 8k!
> 
> o result of testing 2
> 
>  8K: 0.02user 0.04system 6:00.31elapsed
> 32K: 0.04user 0.02system 1:02.13elapsed
> 
>  32K is nearly 6 times faster than 8k!

Did you use the same -B for 8K and 32K ?
You should use 4x buffers in 8K case!
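(If you used the default of 64 buffers, the 32K server gets 64 x 32K = 2MB
of shared buffer cache while the 8K server gets only 64 x 8K = 512KB, so
the 8K run needs something like -B 256 to be comparable.)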

Vadim


Re: [HACKERS] 8K block limit

From
Tatsuo Ishii
Date:
>Tatsuo Ishii wrote:
>> 
>> But modern Unixes do read/write-ahead I/O if the access looks sequential,
>> don't they? I did some testing on my LinuxPPC box.
>> 
>> 0. create table t2(i int,c char(4000));
>> 1. time psql -c "copy t2 from '/tmp/aaa'" test
>>    (aaa has 5120 records and this will create 20MB table)
>> 2. time psql -c "select count(*) from t2" test
>> 3. total time of the regression test
>> 
>> o result of testing 1
>> 
>>  8K: 0.02user 0.04system 3:26.20elapsed
>> 32K: 0.03user 0.06system 0:48.25elapsed
>> 
>>   32K is 4 times faster than 8k!
>> 
>> o result of testing 2
>> 
>>  8K: 0.02user 0.04system 6:00.31elapsed
>> 32K: 0.04user 0.02system 1:02.13elapsed
>> 
>>  32K is nearly 6 times faster than 8k!
>
>Did you use the same -B for 8K and 32K ?
>You should use 4x buffers in 8K case!

Ok. This time I started postmaster as 'postmaster -S -i -B 256'.

test1:
0.03user 0.02system 3:21.65elapsed

test2:
0.01user 0.08system 5:30.94elapsed

a little bit faster, but no significant difference?
--
Tatsuo Ishii


Re: [HACKERS] 8K block limit

From
Vadim Mikheev
Date:
Tatsuo Ishii wrote:
> 
> >
> >Did you use the same -B for 8K and 32K ?
> >You should use 4x buffers in 8K case!
> 
> Ok. This time I started postmaster as 'postmaster -S -i -B 256'.
> 
> test1:
> 0.03user 0.02system 3:21.65elapsed
> 
> test2:
> 0.01user 0.08system 5:30.94elapsed
> 
> a little bit faster, but no significant difference?

Yes. So 32K is a sure win for a few simultaneous sessions.

Vadim


RE: [HACKERS] 8K block limit

From
"Stupor Genius"
Date:
> Additional testing is welcome...
> 
> >The max is 32k because of the aforementioned 15 bits available, but I'd
> >be a bit cautious of trying it.  When I put this in, the highest I could
> >get to work on AIX was 16k.  Pushing it up to 32k caused major breakage
> >in the system internals.  Had to reboot the machine and fsck the file
> >system.  Some files were linked incorrectly, other files 
> disappeared, etc,
> >a real mess.
> >
> >Not sure exactly what it corrupted, but I'd try the 32k limit on a non-
> >production system first...
> 
> I did the above on 6.4.2. What version are you using? Or maybe it's a
> platform-dependent problem?

My platform at the time was AIX 4.1.4.0 and it was definitely AIX
that broke, not postgres.

Glad to hear it works at 32k on other systems though!

Darren