Re: Performance impact of record sizes - Mailing list pgsql-admin

From: John Moore
Subject: Re: Performance impact of record sizes
Msg-id: 5.1.1.6.2.20020704131448.02288fa8@pop3.norton.antivirus
In response to: Re: Performance impact of record sizes (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses: Re: Performance impact of record sizes
           Re: Performance impact of record sizes
List: pgsql-admin
>If you don't want a limit, use TEXT.  Long values are automatically
>stored in TOAST tables to avoid performance problems with sequential
>scans over long row values.


Thanks...
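
As I read it, the behavior you describe works roughly like this (a minimal
sketch, with invented table and column names, and assuming a server new
enough to have pg_column_size()):

    CREATE TABLE notes (
        id   serial PRIMARY KEY,
        body text                -- no declared limit; long values are TOASTed
    );

    INSERT INTO notes (body) VALUES (repeat('x', 200));     -- stays in the heap page
    INSERT INTO notes (body) VALUES (repeat('x', 100000));  -- compressed/moved out of line

    -- pg_column_size() reports the on-disk size of each stored value:
    SELECT id, pg_column_size(body) FROM notes;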

That said, I wasn't quite clear enough in my question: I am focused on OLTP
performance, and in my case the vast majority of rows will have only a few
hundred bytes in that column, while a few (perhaps 1%) may be larger.

I assume that internally there is a fixed page size (by which I mean the
cache buffer size or disk read size) for normal records. In my case, 99% of
the physical rows should be short and would fit easily in whatever that size
is, so I *suspect* I want to keep the data in the physical row rather than
using TEXT and having it stored separately from the record. The question is:
are there any unexpected consequences? For example, if I have a whole bunch
of rows with, say, a 10K varchar field that is populated with only one or two
hundred bytes each, will it perform just as well as if that field
were defined as a 200-byte field?
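
Put concretely, the comparison I have in mind is something like this (a
sketch only; the table names are invented, and generate_series() and
pg_relation_size() assume a reasonably recent server):

    -- Same ~200-byte payload under two different declared limits:
    CREATE TABLE t_wide  (v varchar(10000));
    CREATE TABLE t_tight (v varchar(200));

    INSERT INTO t_wide  SELECT repeat('x', 200) FROM generate_series(1, 10000);
    INSERT INTO t_tight SELECT repeat('x', 200) FROM generate_series(1, 10000);

    -- If the declared limit is irrelevant to storage, these should match:
    SELECT pg_relation_size('t_wide'), pg_relation_size('t_tight');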

A related question: is it more expensive to use varchar than fixed-length
char fields? I assume there is some additional work in physically unpacking
the record.
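
One difference I can see directly is that char(n) is blank-padded to its full
declared length, while varchar stores only the actual bytes. A quick sketch
(invented table names):

    CREATE TABLE t_char    (v char(200));
    CREATE TABLE t_varchar (v varchar(200));

    INSERT INTO t_char    VALUES ('abc');   -- padded with spaces to 200 characters
    INSERT INTO t_varchar VALUES ('abc');   -- stored as just 3 characters

    SELECT octet_length(v) FROM t_char;     -- 200
    SELECT octet_length(v) FROM t_varchar;  -- 3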

My past experience is with Informix, much of it with very old versions where
views were high-cost, and so were varchars. Likewise, you didn't want your
physical row to exceed the size of a physical page if you could avoid it.

John