Let me guess: your update script updates a significant portion of the
table, right?
One of the drawbacks of PostgreSQL's MVCC architecture is that when you
update a row, the old version of the row doesn't disappear. It simply
gets marked as invalid (or expired, or whatever terminology the
PostgreSQL hackers use). Tables that see a lot of updates and deletes
invariably end up with a lot of extra cruft slowing them down.
Vacuuming the table removes these dead rows, and PostgreSQL becomes
fast once again.
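For example, assuming your table is called t (VACUUM VERBOSE reports
how many dead tuples it found and removed, which is a good way to see
the bloat for yourself):

    -- reclaim dead tuples and report what was removed
    VACUUM VERBOSE t;
    -- or refresh the planner's statistics at the same time
    VACUUM ANALYZE t;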
In fact, in my opinion the one truly wonderful feature in the new,
soon-to-be-released 7.2 version of PostgreSQL is the new vacuum code
that allows you to remove some of these dead tuples without slamming an
exclusive lock on the table. Try vacuuming the table between updates
and see if that helps.
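Something like this, as a rough sketch. I'm assuming you're using the
pgtcl interface and guessing at your table and column names (t, id,
val) and at a reasonable vacuum interval:

    # connect once, run the updates, and vacuum every 50 iterations
    set conn [pg_connect -conninfo "dbname=test"]
    for {set i 0} {$i < 200} {incr i} {
        # each UPDATE leaves behind a dead version of the row
        pg_result [pg_exec $conn "UPDATE t SET val = $i WHERE id = 1"] -clear
        # periodically reclaim the dead tuples; VACUUM has to run
        # outside a transaction block, which pg_exec's autocommit gives us
        if {$i > 0 && $i % 50 == 0} {
            pg_result [pg_exec $conn "VACUUM t"] -clear
        }
    }
    pg_disconnect $conn

Before 7.2 each VACUUM will take an exclusive lock on t, but with a
single connection in a test script that costs you nothing.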
Jason
On Tue, 2002-01-29 at 17:52, Cheirie Shum wrote:
> Hi, I've written a little tcl program to test how many seconds it takes to
> run an 'update' transaction and found that the running time increases by a
> couple of seconds every time I run my script.
>
> My script does the following:
> -connect to postgres
> -update a row and then update it again with some other value
> -iterate the update step above 200 times
> -print the number of seconds that iteration took
> -disconnect from postgres
>
> Results: So the first time I get about 2 seconds, then 4, 6, 8... etc.
>
> So, the more times that row gets updated, the longer the next update
> takes... What is happening? What do I need to do so that postgres
> doesn't do this? ... What do I need to do so that postgres always
> updates a row 200 times in 2 seconds?
>
> Any help would be appreciated.
> Please reply to: cshum@cats.ucsc.edu
>