Re: Open 7.3 items: heap tuple header - Mailing list pgsql-hackers
From | Manfred Koizar
---|---
Subject | Re: Open 7.3 items: heap tuple header
Date |
Msg-id | 653qlu4v9f03hkbau93gpmv78sqpg6sdkn@4ax.com
In response to | Open 7.3 items, with names (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses | Re: Open 7.3 items: heap tuple header
List | pgsql-hackers
On Fri, 16 Aug 2002 01:05:07 -0400 (EDT), Bruce Momjian
<pgman@candle.pha.pa.us> wrote:
>                P O S T G R E S Q L
>            7 . 3  O P E N  I T E M S
>
>improve macros in new tuple header code (Manfred)

ISTM there's no consensus about what "improve" means.  I tried to
start discussing this after my vacation, but apparently people had
better things to do.

On Wed, 07 Aug 2002 16:16:14 +0200, I wrote ("Heap tuple header
issues"):

:. Transaction and command ids, performance
:
:I offered to provide cheaper versions of GetCmin and GetCmax to be
:used by the tqual routines.  These versions would reduce additional
:CPU work from two infomask compares to one.  Is this still
:considered an issue?

However, I don't think this would lead to any measurable difference.

:. Transaction and command ids, robustness
:
:I'm still of the opinion that putting *more* knowledge into the
:SetXxx macros is the way to go.  The runaway INSERT bug could as
:well have been fixed by changing SetCmax to do nothing if
:HEAP_XMAX_INVALID is set, and changing SetXmaxInvalid to set
:HEAP_XMAX_INVALID.  Likewise I would change SetXmax to set
:HEAP_XMAX_INVALID if xid == InvalidTransactionId, or reset it if
:not (not sure about this).  Same for SetXminInvalid and SetXmin.

This is the main point of disagreement: Tom Lane wants lighter
macros, I want heavier macros.  Which direction shall we go?

:Further, I'll try to build a regression test using statement timeout
:to detect runaway INSERT/UPDATE (the so-called "halloween" problem).

This won't hurt anyway.  I'll start working on this.

BTW, my changes have been criticized with words like "vague unease",
"zero confidence", "very obviously not robust", "do not trust the
current code at all", etc., while from day one all my patches have
passed all regression tests.  This makes me wonder whether there is
something wrong with the regression tests ...

:. Oids
:
:I was a bit surprised that these patches went in so smoothly; this
:must be due to the fact that there was a lot of work to do at that
:time.  Personally I feel that these changes are more dangerous than
:the Xmin/Cid/Xmax patches, and I ask the hackers to especially
:review part 2, which contains changes to the executor and even to
:bootstrap.

:. Oids, t_infomask
:
:There has been no comment from a tool developer.

:. Oids, heap_getsysattr
:
:We thought that a TupleDesc parameter would have to be added for
:this function.  However, tests showed that heap_getsysattr is not
:called to get the oid when the tuple doesn't have an oid:
:"ERROR: Attribute 'oid' not found".

:. Oids, micro tuning
:
:There are a few places where storing an oid in a local variable
:might be a little faster than fetching it several times from a heap
:tuple header.

However, I don't think this would lead to any measurable difference.

:. Overall performance
:
:If Joe Conway can be talked into running OSDB benchmarks with old
:and new heap tuple header format, I'll provide patches and
:instructions to easily switch between versions.  Or, Joe, can you
:tell me what I need to have and need to do to set up a benchmarking
:environment?

With Joe's help (thanks again, Joe) I've managed to set up a
benchmarking environment, and I have been continuously testing
different configurations for a week now.  There are issues with OSDB
which I plan to bring up later when things cool down, but a first
analysis seems to show that with the reduced heap tuple header size
we get a speed improvement of up to 3%, especially when the database
is significantly larger than system memory.  When the database size
is only a small fraction of available memory, results vary so widely
that I cannot tell whether the new heap tuple macros are a loss or a
win.

:. CVS
:
:There have been a lot of "CVS broken" messages in the past few days.
:When I tried
:    cvs -z3 log heapam.c
:I got
:| cvs server: failed to create lock directory for
:| `/projects/cvsroot/pgsql/src/backend/access/heap'
:| (/projects/cvsroot/pgsql/src/backend/access/heap/#cvs.lock):
:| No such file or directory
:| cvs server: failed to obtain dir lock in repository
:| `/projects/cvsroot/pgsql/src/backend/access/heap'
:| cvs [server aborted]: read lock failed - giving up
:
:Is this a temporary problem, or did I miss any planned changes?

AFAIK I have to re-checkout everything.

Servus
 Manfred