Re: Faster Updates - Mailing list pgsql-hackers

From Nicolai Petri
Subject Re: Faster Updates
Date 2006-06-03 21:05
Msg-id 200606032105.03321.nicolai@catpipe.net
In response to Re: Faster Updates  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Faster Updates
List pgsql-hackers
On Saturday 03 June 2006 17:27, Tom Lane wrote:
> PFC <lists@peufeu.com> writes:
> >    [snip - complicated update logic proposal]
> >     What do you think ?
>
> Sounds enormously complicated and of very doubtful net win --- you're
>
> [snip - ... bad idea reasoning] :)

What if every backend, while processing a transaction, collected a list of
touched records, probably with a maximum number of entries (a GUC) collected
per transaction. Then, when the transaction completes, the list of tuples is
sent to pg_autovacuum, or possibly a new process, which selectively visits
only those tuples. Of course it should have some kind of logic attached so
that we don't visit the tuples for vacuum unless we are quite sure no running
transaction would block adding the blocks to the FSM. We might even be able
to queue up the blocks until a later time (GUC: queue-max-time plus
queue-size-limit) if we cannot determine that it is safe to add them to the
FSM right now.
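
To make it a bit more concrete, here is a rough standalone C sketch of what I
have in mind. None of this is real PostgreSQL code; the names (TouchedList,
record_touched, dirty_tuple_cap, queue_size_limit, and so on) are all made up
for illustration, and the safety check is just a stand-in for the real
visibility logic:

/* Illustrative sketch only -- not actual PostgreSQL internals.  The idea:
 * each backend records the blocks it dirties, up to a GUC limit, and hands
 * the list to a vacuum-like worker at commit.  The worker only reclaims a
 * block once no running transaction could still see the dead tuples;
 * otherwise the blocks are re-queued until a deadline (queue-max-time) or
 * the queue size limit is hit.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t BlockNum;
typedef uint32_t Xid;

/* Hypothetical GUCs proposed above. */
static int dirty_tuple_cap  = 64;    /* max entries collected per xact */
static int queue_size_limit = 1024;  /* max deferred blocks held       */

typedef struct TouchedList {
    Xid      xid;               /* transaction that dirtied the blocks */
    int      count;
    BlockNum blocks[64];        /* bounded by dirty_tuple_cap          */
    bool     overflowed;        /* cap hit: fall back to normal vacuum */
} TouchedList;

/* Called from the backend whenever a tuple is updated or deleted. */
static void
record_touched(TouchedList *list, BlockNum blk)
{
    if (list->count >= dirty_tuple_cap) {
        list->overflowed = true;   /* too many: regular vacuum's job */
        return;
    }
    list->blocks[list->count++] = blk;
}

/* Could any running transaction still see the dead tuples?  Here we just
 * compare against the oldest running xid; the real check would have to
 * consult shared transaction state.
 */
static bool
safe_to_reclaim(const TouchedList *list, Xid oldest_running_xid)
{
    return list->xid < oldest_running_xid;
}

/* At commit: either reclaim the blocks now (report free space to the FSM)
 * or defer them until it becomes safe, dropping the list if the deferred
 * queue is already full.
 */
static void
flush_at_commit(TouchedList *list, Xid oldest_running_xid, int *queued_blocks)
{
    if (list->overflowed) {
        list->count = 0;                      /* leave it to regular vacuum */
        return;
    }
    if (safe_to_reclaim(list, oldest_running_xid)) {
        for (int i = 0; i < list->count; i++)
            printf("reclaim block %u -> FSM\n", (unsigned) list->blocks[i]);
    } else if (*queued_blocks + list->count <= queue_size_limit) {
        *queued_blocks += list->count;        /* retry after queue-max-time */
        printf("deferred %d blocks\n", list->count);
    }
    list->count = 0;
}

int
main(void)
{
    TouchedList list = { .xid = 100 };
    int queued = 0;

    record_touched(&list, 7);
    record_touched(&list, 42);
    flush_at_commit(&list, 90, &queued);   /* xid 100 not yet old: defer  */

    record_touched(&list, 7);
    flush_at_commit(&list, 200, &queued);  /* now safe: reclaim via FSM   */
    return 0;
}

The real version would obviously have to live in shared memory and talk to
the actual transaction machinery, but hopefully the shape of it is clear.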

I guess this has probably been suggested before, and there is probably a
reason why it cannot be done or wouldn't be effective. But it could be a big
win for common workloads such as web applications. Where it would be
troublesome is on systems with long-running transactions; there it might as
well just be disabled.

Best regards,
Nicolai Petri


