Thread: Packed short varlenas, what next?

Packed short varlenas, what next?

From
Gregory Stark
Date:
I'm really curious to know how people feel about the varlena patch. In
particular I know these issues may elicit comment:

1) Do we really need a special case for little-endian machines? I think it
   would be trivial to add but having two code paths may be annoying to
   maintain. The flip side is it would make it easier to read varlena headers
   in gdb which I found kind of annoying with them in network byte order.

2) How do people feel about the way I inlined most of the VARATT_IS_SHORT
   cases in heaptuple.c? I tried at first to hide that all in the att_align
   and att_addlength macros but a) it would never be possible to hide most of
   it and b) it would require a few more redundant tests.

3) How do people feel about not allowing an escape hatch for new types and
   explicitly exempting int2vector and oidvector? The alternatives are either
   a) adding a new column to pg_type and pg_attribute and setting that on
   catalog attributes that are accessed through GETSTRUCT (as the first
   varlena in the table) and also setting it on oidvector and int2vector
   because they don't call pg_detoast_datum(); or b) fixing int2vector and
   oidvector to pass through pg_detoast_datum and fixing all the accesses to
   the first int2vector/oidvector in every catalog table to use fastgetattr
   instead; or c) keeping things as they are now.

4) Should I start hitting the more heavily trod code paths in text.c and
   numeric.c to avoid detoasting short varlenas? The macro API is not quite
   complete enough for this yet, so it may make sense to tackle at least one
   code site before merging it to be sure we have a workable API for data
   types that want to avoid unnecessary detoasting.

The latest patch is at 
http://community.enterprisedb.com/varlena/patch-varvarlena-12.patch.gz

I've been doing some benchmarking: I see a 9.7% space saving on the
Benchmark-SQL 5.2 schema, which translates into about an 8% performance gain.
The DBT2 benchmarks show a smaller 5.3% space saving because we had already
done a lot more optimizing of the schema. 

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com


Re: Packed short varlenas, what next?

From
Peter Eisentraut
Date:
Gregory Stark wrote:
> I'm really curious to know how people feel about the varlena patch.

As I had mentioned earlier, I'm missing a plan to allow 8-byte varlena 
sizes.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/


Re: Packed short varlenas, what next?

From
Tom Lane
Date:
Peter Eisentraut <peter_e@gmx.net> writes:
> As I had mentioned earlier, I'm missing a plan to allow 8-byte varlena 
> sizes.

I don't think it's entirely fair to expect this patch to solve that
problem.  In the first place, that is not what the patch's goal is,
but merely tangentially related to the same code.  In the second place,
I don't see any way we could possibly do that without wide-ranging code
changes; to take just one point, much of the code that works with
varlenas uses "int" or "int32" variables to compute sizes.  So it would
certainly expand the scope of the patch quite a lot to try to put that
in place, and it's mighty late in the devel cycle to be thinking about
that sort of thing.

For the moment I think it should be enough to expect that the patch
allow for more than one format of TOAST pointer, so that if we ever did
try to support 8-byte varlenas, there'd be a way to represent them
on-disk.  Some of the alternatives that we discussed last year used up
all of the "prefix space" and wouldn't have allowed expansion in this
particular direction.
        regards, tom lane


Re: Packed short varlenas, what next?

From
Gregory Stark
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Peter Eisentraut <peter_e@gmx.net> writes:
> > As I had mentioned earlier, I'm missing a plan to allow 8-byte varlena 
> > sizes.

Hm, change VARHDRSZ to 8 and change all the varlena data types to have an
int64 leading field? I suppose it could be done, and it would give us more
bits to play with in the codespace since then we could limit 4-byte headers to
128M or something. But yes, there are tons of places in the code that
currently do arithmetic on sizes using integers -- and often signed integers
at that.

But that's a change to what a *detoasted* datum looks like. My patch mainly
changes what a *toasted* datum looks like. (Admittedly after making more data
fall in that category than previously.) The only change to a detoasted datum
is that the size is stored in network byte order.

> For the moment I think it should be enough to expect that the patch
> allow for more than one format of TOAST pointer, so that if we ever did
> try to support 8-byte varlenas, there'd be a way to represent them
> on-disk.  Some of the alternatives that we discussed last year used up
> all of the "prefix space" and wouldn't have allowed expansion in this
> particular direction.

Ah yes, I had intended to include the bit-pattern choice in the list as well.

There are two issues there:

1) The lack of 2-byte patterns, which is quite annoying as really *any*
   on-disk datum would fit in a 2-byte-header varlena. However, it became
   quite tricky to convert things to 2-byte headers, especially for
   compressed data; it would have made for a much bigger patch to
   tuptoaster.c and pg_lzcompress. And I became convinced that it was best
   to get the most important gain first: saving 2 bytes on wider tuples is
   less important than 3-6 bytes on narrow tuples.

2) The choice of encoding for toast pointers. Note that currently they don't
   actually save *any* space due to the alignment requirements of the OIDs,
   which seems kind of silly, but I didn't see any reasonable way around
   that. The flip side is that it gives us 24 bits to play with if we want
   to have different types of external pointers or more meta-information
   about the toasted data.

   One of the details here is that I didn't store the compressed bit
   anywhere for external toast pointers. I just made the macro compare the
   rawsize and extsize. If that strikes anyone as evil we could take a byte
   out of those 3 padding bytes for flags and store a compressed flag there.

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com



Re: Packed short varlenas, what next?

From
Tom Lane
Date:
Gregory Stark <gsstark@mit.edu> writes:
> 2) The choice of encoding for toast pointers. Note that currently they don't
>    actually save *any* space due to the alignment requirements of the OIDs,
>    which seems kind of silly but I didn't see any reasonable way around that.

I was expecting that we'd store them as unaligned and memcpy a toast
pointer into a suitably-aligned local variable any time we wanted to
look at its contents.  Detoasting is expensive enough that that's not
going to add any noticeable percentage time-overhead, and not having to
align toast pointers should be a pretty good percentage space-saving,
seeing that they're only 20-some bytes anyway.

>    One of the details here is that I didn't store the compressed bit anywhere
>    for external toast pointers. I just made the macro compare the rawsize and
>    extsize. If that strikes anyone as evil we could take a byte out of those 3
>    padding bytes for flags and store a compressed flag there.

I believe this is OK since the toast code doesn't compress unless space
is actually saved.  You should put a note in the code that that behavior
is now necessary for correctness, not just a performance tweak.
        regards, tom lane


Re: Packed short varlenas, what next?

From
Josh Berkus
Date:
Greg,

> I'm really curious to know how people feel about the varlena patch. In
> particular I know these issues may elicit comment:

Haven't tested yet.  Will let you know when I do.

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco


Re: Packed short varlenas, what next?

From
Tom Lane
Date:
Gregory Stark <stark@enterprisedb.com> writes:
> I'm really curious to know how people feel about the varlena patch.

One thing I think we could do immediately is apply the change to replace
"VARATT_SIZEP(x) = len" with "SET_VARSIZE(x, len)" --- that would
considerably reduce the size of the patch and allow people to focus on
the important changes instead of underbrush.  Barring objection I'll go
ahead and do that today.
        regards, tom lane


Re: Packed short varlenas, what next?

From
Tom Lane
Date:
I wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
>> I'm really curious to know how people feel about the varlena patch.

> One thing I think we could do immediately is apply the change to replace
> "VARATT_SIZEP(x) = len" with "SET_VARSIZE(x, len)" --- that would
> considerably reduce the size of the patch and allow people to focus on
> the important changes instead of underbrush.  Barring objection I'll go
> ahead and do that today.

I've committed this, but in testing with a hack that does ntohl() in the
VARSIZE macro and vice-versa in SET_VARSIZE, I find that core passes
regression but several contrib modules do not.  It looks like the
contrib modules were depending on various random structs being
compatible with varlena, while not exposing that dependence in ways that
either of us caught :-(.

I'll work on cleaning up the remaining mess tomorrow, but I think that
we may need to think twice about whether it's OK to expect that every
datatype with typlen = -1 will be compatible with the New Rules.  I'm
back to wondering if maybe only types with typalign 'c' should get
caught up in the changes.
        regards, tom lane


Re: Packed short varlenas, what next?

From
Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> I've committed this, but in testing with a hack that does ntohl() in the
> VARSIZE macro and vice-versa in SET_VARSIZE, I find that core passes
> regression but several contrib modules do not.  It looks like the
> contrib modules were depending on various random structs being
> compatible with varlena, while not exposing that dependence in ways that
> either of us caught :-(.

I just noticed that last night myself. In particular the GiST modules seem to
be a major problem: they define dozens of new objects, many of which are just
passing around C data structures internally, but some of which are objects
that get stored in the database. I have no idea which are which and which
ones are varlenas.

Worse, it uses PG_GETARG_POINTER() and explicitly calls PG_DETOAST_DATUM() in
the few places it assumes finding toasted data is possible. That's even harder
to track down.

I can send up a patch for the data types I fixed last night.

> I'll work on cleaning up the remaining mess tomorrow, but I think that
> we may need to think twice about whether it's OK to expect that every
> datatype with typlen = -1 will be compatible with the New Rules.  I'm
> back to wondering if maybe only types with typalign 'c' should get
> caught up in the changes.


I don't think we can key off typalign='c'. That would entail changing varlenas
to typalign 'c', which would throw off other consumers of typalign that expect
it to be the alignment of the detoasted datum. Moreover, I still align them
when they have the full 4-byte header, using the typalign.

I think we would want to introduce a new column, or maybe a new attlen value,
or a new typalign value.

I was thinking about that though and it's not so simple. It's easy enough not
to convert to short varlena for data types that don't assert that they support
the packed format. That's not a problem. That takes care of data types which
don't call pg_detoast_datum().

But not storing the varlena header in network byte order sometimes would be
quite tricky. There are a great many places that call VARSIZE that don't look
at the attalign or even have it handy.

If we made it a new attlen value we could have two different macros, but that
will be another quite large patch. It would mean hitting all those datatypes
all over again to change every instance of VARSIZE into NEWVARSIZE or
something like that. Plus all the sites in the core that call VARSIZE would
need to check attlen and call the right one.

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com


Re: Packed short varlenas, what next?

From
Tom Lane
Date:
Gregory Stark <stark@enterprisedb.com> writes:
> I just noticed that last night myself. In particular the GiST modules seem to
> be a major problem: they define dozens of new objects, many of which are just
> passing around C data structures internally, but some of which are objects
> that get stored in the database. I have no idea which are which and which
> ones are varlenas.

FWIW, when I went to bed last night I had hstore and intarray working,
but was still fooling with ltree.  Didn't get to the others yet.
        regards, tom lane


Re: Packed short varlenas, what next?

From
Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> FWIW, when I went to bed last night I had hstore and intarray working,
> but was still fooling with ltree.  Didn't get to the others yet.

Thanks, I was getting lost in the gist stuff. 

I've disabled packed varlenas for user-defined data types and find tsearch2
and _int still fail. tsearch2 requires the small patch attached. _int seems to
be unrelated.

To make them work with packed varlenas would require ensuring that they're
always detoasted instead of using GETARG_POINTER. I'll look at that tomorrow.
Er, today.

(It would be nice if we made it possible to define gist indexable data types
without so much copy/pasted code though. These data types are all just
defining some basic operations and then copy/pasting the same algorithms to
implement picksplit and the other index support functions in terms of those
basic operations.)


Index: contrib/tsearch2/ts_cfg.c
===================================================================
RCS file: /home/stark/src/REPOSITORY/pgsql/contrib/tsearch2/ts_cfg.c,v
retrieving revision 1.22
diff -c -r1.22 ts_cfg.c
*** contrib/tsearch2/ts_cfg.c    27 Feb 2007 23:48:06 -0000    1.22
--- contrib/tsearch2/ts_cfg.c    1 Mar 2007 04:19:02 -0000
***************
*** 62,70 ****
          ts_error(ERROR, "SPI_execp return %d", stat);
      if (SPI_processed > 0)
      {
!         prsname = (text *) DatumGetPointer(
!                                            SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1, &isnull)
!             );
          oldcontext = MemoryContextSwitchTo(TopMemoryContext);
          prsname = ptextdup(prsname);
          MemoryContextSwitchTo(oldcontext);
  
--- 62,68 ----
          ts_error(ERROR, "SPI_execp return %d", stat);
      if (SPI_processed > 0)
      {
!         prsname = DatumGetTextP(SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1, &isnull));
          oldcontext = MemoryContextSwitchTo(TopMemoryContext);
          prsname = ptextdup(prsname);
          MemoryContextSwitchTo(oldcontext);

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com


Re: Packed short varlenas, what next?

From
Tom Lane
Date:
Gregory Stark <stark@enterprisedb.com> writes:
> I've disabled packed varlenas for user-defined data types and find tsearch2
> and _int still fail. tsearch2 requires the small patch attached. _int seems to
> be unrelated.

As of when?  I committed fixes earlier tonight that seem to handle the
case of VARSIZE-is-ntohl.

The patch you suggest is orthogonal to what I did; it looks like it
might be right, but regression passes without it, so what was your test
case that led you to it?
        regards, tom lane


Re: Packed short varlenas, what next?

From
Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> Gregory Stark <stark@enterprisedb.com> writes:
>> I've disabled packed varlenas for user-defined data types and find tsearch2
>> and _int still fail. tsearch2 requires the small patch attached. _int seems to
>> be unrelated.
>
> As of when?  I committed fixes earlier tonight that seem to handle the
> case of VARSIZE-is-ntohl.
>
> The patch you suggest is orthogonal to what I did; it looks like it
> might be right, but regression passes without it, so what was your test
> case that led you to it?

I'm running the regression tests with the full packed varlena changes except
that I've modified it not to pack user-defined data types (typid >
FirstNormalObjectId). So all varlenas need to go through detoast_datum if
they come out of a heap tuple, even if (especially if) they used to be too
small to be toasted.

In fact I think the line I posted is actually a bug anyway. I'm unclear what
the text field it's fetching represents, and maybe it's usually small, but it
looks like there's nothing stopping it from being large enough to be toasted
in theory.

To get tsearch et al to work with packed varlena headers without disabling
them for user defined data types will require a lot more detoast_datum calls
throughout the gist data types (or defining proper GETARG macros for them).

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com


Re: Packed short varlenas, what next?

From
"Denis Lussier"
Date:
BenchmarkSQL is open source, but I don't think anyone has published version 5.2 yet on pgFoundry.   Amongst other goodies, version 5.2 allows for running Java-based TPC-C and/or TPC-B-like benchmarks from the command line or the cutesy GUI.   We've also added consistency checks to the end of the TPC-C run (which MySQL always fails).
 
Affan is coming out shortly with version 5.3, he'll publish by early next week.
 
As a side note and different topic:  the TPC-B is an example of a very disk-intensive little transaction.  Running it with a mocked-up version of COMMIT NOWAIT produces a 4x performance increase on disk setups where fsync is not "free".
 
--Luss

 
On 2/27/07, Gregory Stark <stark@enterprisedb.com> wrote:


I've been doing some benchmarking: I see a 9.7% space saving on the
Benchmark-SQL 5.2 schema, which translates into about an 8% performance gain.
The DBT2 benchmarks show a smaller 5.3% space saving because we had already
done a lot more optimizing of the schema.