Thread: AW: Why vacuum?
> > > The tendency here seems to be towards an improved smgr.
> > > But, it is currently extremely cheap to calculate where a new row
> > > needs to be located physically. This task is *a lot* more expensive
> > > in an overwrite smgr.
>
> I don't agree. If (as I have proposed) the search is made in the
> background by a low-priority process, you just have to look up a cache
> entry to find out where to write.

If the priority is too low, you will end up with the same behaviour as
now, because the cache will be emptied by multiple high-priority new
rows, so you write to the end anyway.

Conclusion: in exactly those cases where overwrite would be most
advantageous (a high-volume, frequently modified table) your system
won't work, unless you concede my point and make it *very* expensive
(= high priority).

Andreas
Zeugswetter Andreas SB wrote:
>
> If the priority is too low you will end up with the same behavior as current,

Yes, and that is the intended behaviour. I'd use idle priority for it.

> because the cache will be emptied by high priority multiple new rows,
> thus writing to the end anyways.

Yes, but this only happens when you don't have enough spare idle CPU
time. If you are in such a situation for long periods, there's nothing
you can do; you already have problems.

My approach wins here because it allows you to have bursts of CPU
utilization without being affected by the overhead of an overwriting
smgr that (without hacks) will always try to find available slots, even
in high-load situations.

> Conclusio: In those cases where overwrite would be most advantageous (high
> volume modified table) your system won't work

Why? I have plenty of CPU time available on my server, even if one of
my tables is highly volatile and fast-changing.

Bye!
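To make the two positions concrete, here is a minimal sketch (names and structure are mine, not anything from the PostgreSQL source) of the scheme being debated: an idle-priority scanner fills a cache of free slots, writers do a cheap cache lookup, and an empty cache degrades gracefully to appending at the end of the table, i.e. the current behaviour Andreas describes:

```python
from collections import deque

class Table:
    """Toy model of a heap with a background-filled free-slot cache.

    A low-priority scanner refills free_slots; writers pop from it in
    O(1). When the cache is empty (scanner starved of CPU), inserts
    fall back to appending at the end of the table.
    """

    def __init__(self):
        self.slots = []            # each entry holds a row, or None if dead
        self.free_slots = deque()  # cache filled by the background scanner

    def insert(self, row):
        if self.free_slots:                # cheap cache lookup: reuse a slot
            pos = self.free_slots.popleft()
            self.slots[pos] = row
        else:                              # cache empty: append, as today
            pos = len(self.slots)
            self.slots.append(row)
        return pos

    def delete(self, pos):
        self.slots[pos] = None             # mark dead; scanner reclaims later

    def scan_for_free(self, budget=100):
        """One idle-priority pass: cache up to `budget` dead slots."""
        cached = set(self.free_slots)
        for i, row in enumerate(self.slots):
            if budget == 0:
                break
            if row is None and i not in cached:
                self.free_slots.append(i)
                budget -= 1

# Demo: a deleted slot is reused only after the scanner has run.
t = Table()
a = t.insert("x")
t.insert("y")
t.delete(a)
t.scan_for_free()
print(t.insert("z") == a)   # reused the cached slot
```

Both sides of the argument are visible here: if `scan_for_free` never gets scheduled, `free_slots` stays empty and every insert appends (Andreas's objection); if there is spare idle CPU, inserts reuse dead slots at cache-lookup cost (Daniele's claim).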
* Daniele Orlandi <daniele@orlandi.com> [001214 09:10] wrote:
> Zeugswetter Andreas SB wrote:
> >
> > If the priority is too low you will end up with the same behavior as current,
>
> Yes, and it is the intended behaviour. I'd use idle priority for it.

If you're talking about vacuum, you really don't want to do this: since
you hold an exclusive lock on the file during your vacuum and have no
way to do priority lending, you can deadlock.

> > because the cache will be emptied by high priority multiple new rows,
> > thus writing to the end anyways.
>
> Yes, but this only happens when you don't have enought spare idle CPU
> time. If you are in such situation for long periods, there's nothing you
> can do, you already have problems.
>
> My approach in winning here because it allows you to have bursts of CPU
> utilization without being affected by the overhead of a overwriting smgr
> that (without hacks) will always try to find available slots, even in
> high load situations.
>
> > Conclusio: In those cases where overwrite would be most advantageous (high
> > volume modified table) your system won't work
>
> Why ? I have plenty of CPU time available on my server, even if one of
> my table is highly volatile, fast-changing.

When your table grows to be very large, you'll see what we're talking
about.

--
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."
Alfred Perlstein wrote:
>
> If you're talking about vacuum, you really don't want to do this,

No, I'm not talking about vacuum as it exists now; it's just a process
that scans tables to find available blocks/tuples. It is effectively
optional: if it doesn't run, the database behaves just like it does now.

> what's going to happen is that since you have an exclusive lock on
> the file during your vacuum and no way to do priority lending you
> can deadlock.

There is no exclusive lock; it's just a reader.

> When your table grows to be very large you'll see what we're talking
> about.

I see this as an optimization issue. If the scanner isn't smart and
wastes time scanning areas of the table that have not been emptied, you
simply fall back to the current behaviour.

Bye!