Re: A long-running transaction - Mailing list pgsql-sql

From John Summerfield
Subject Re: A long-running transaction
Date
Msg-id 461CEF66.4010907@herakles.homelinux.org
In response to Re: A long-running transaction  (Andrew Sullivan <ajs@crankycanuck.ca>)
Responses Re: A long-running transaction  (Andrew Sullivan <ajs@crankycanuck.ca>)
List pgsql-sql
Andrew Sullivan wrote:
> On Wed, Apr 11, 2007 at 05:54:45AM +0800, John Summerfield wrote:
>> Linux caches writes, I don't think it should be hitting disk at all. The
> 
> I _sure hope_ you don't care about this data, then.  That's not a
> real safe way to work.  But. . .

As I said in the first place, this is all one transaction. No doubt that
contributes to the problem.

> 
>> table being updated contains records 7482 (658K raw data) of which
>> probably fewer than 2000 are being updated, and typically the same ones
>> all the time: we're updating the date of the latest trade.
> 
> . . . this is likely your problem.  The updates probably get slower
> and slower.  What's happening is that you're expiring a row _for each
> update_, which means it _isn't_ the same row every time.  This is
> approximately the worst use model for PostgreSQL's MVCC approach. 
> Worse, though it's not the same row, you have to grovel through all
> the dead rows to find the actually live one.  So that's probably
> what's killing you.

I don't understand what you're telling me. Your surmise correctly
reflects my experience, but I don't understand the explanation.

Each row's identified by a key, and the key doesn't change.

I don't know how PostgreSQL works, but 25 or so years ago I had a good
understanding of how ADABAS* (now MaxDB, I believe, though doubtless
rewritten several times since) worked then. The OS of the day was IBM's
OS/VS or a close relative from Fujitsu, OSIV/F4, and those worked very
differently from how Linux works today.

ADABAS would put the updated record right back where it came from; it
maintained some free space in each block in case an update increased the
size of a record. Whether PostgreSQL does that or something different I
don't know; the ADABAS technique would handle this workload well.
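
If that dead-row explanation is right, something like the following
should make it visible. The table and column names ("trades", "code",
"last_trade") are made up here as a stand-in for my real schema:

-- Hypothetical table standing in for mine:
--   trades(code text PRIMARY KEY, last_trade date, ...)

-- Each statement updates the "same" row by key, but under MVCC each one
-- writes a new row version and leaves the old one behind as a dead row:
UPDATE trades SET last_trade = DATE '2007-04-10' WHERE code = 'XYZ';
UPDATE trades SET last_trade = DATE '2007-04-11' WHERE code = 'XYZ';

-- Run from another session, this reports how many dead row versions it
-- found and how many it could not yet remove; nothing my open transaction
-- has obsoleted can be reclaimed until that transaction ends:
VACUUM VERBOSE trades;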

I'll point out these lines again, in case your explanation means you think
PostgreSQL is spending a lot of CPU:
user  :   1d  3:17:04.03  16.0%  page in :  131097310  disk 1: 3079516r20087664w
nice  :       0:05:39.64   0.1%  page out:  197016649
system:   2d 20:38:37.13  40.1%  page act:   87906251

Look at the system line above. The way I figure it, the kernel is doing an
enormous amount of work handling its buffers. System time is overhead;
it's like a government tax - you have to pay it, but you try hard not to
pay too much.

User time is time spent doing real work (perhaps not when it's in
PostgreSQL, but in general), and one does not expect to see it dwarfed by
system time. Here, the kernel-space code is spending roughly two and a
half times as long on something as the user-space code spends doing real
work.

It's not normal Linux kernel behaviour, and it's not normal on my laptop.
The best explanation I have is that PostgreSQL is doing something that
provokes this bad behaviour. If you think that updating records as I do
causes it, I won't argue the point, because I simply do not know.

I don't need a workaround; I already have two. One is to commit the
updates whenever the date field in the input records changes, and the
other is to split the input data into separate files, one per date. Both
give reasonable performance: the first took around three and a half hours
(on a different system, a Pentium IV 3.0 GHz with HT, which currently
declines to boot:-(( ), and the other ran a little longer on the laptop.
Roughly what I expected.
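
For what it's worth, the first workaround boils down to something like
this (again with made-up table and column names), committing each time
the date in the input changes:

BEGIN;
-- all the updates carrying one trading date ...
UPDATE trades SET last_trade = DATE '2007-04-10' WHERE code = 'AAA';
UPDATE trades SET last_trade = DATE '2007-04-10' WHERE code = 'BBB';
COMMIT;  -- the date in the input has changed, so end the transaction here

BEGIN;
-- ... then the updates for the next date, and so on
UPDATE trades SET last_trade = DATE '2007-04-11' WHERE code = 'AAA';
COMMIT;

Once each batch is committed, VACUUM (or autovacuum, where it's enabled)
can reclaim the dead row versions before the next batch piles more on top,
which I take to be why the per-date commits run so much better.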


Since all the work is within one transaction, there's no real need for
PostgreSQL to write data at all, unless it's running out of memory.


* ADABAS then didn't do SQL, and wasn't relational. That was about when
IBM's DB2 came out, and Cincom had SUPRA. According to Cincom, DB2 wasn't
relational either.
> 
>> Laptop (1.25 Gbytes)
>> shared_buffers = 1000                   # min 16 or max_connections*2,
>> 8KB each
> 
> so you have 8000 K configured as your shared buffers there.  That's
> as much as you'll ever use for shared memory by Postgres.  You can
> probably bump a little in this case.  Your other config seems ok to
> me.  But I don't think this is your problem -- the update pattern is.  

In both cases, I am using default values. Unless someone thinks there's
clearly something to be gained by changing them, I won't.
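
For the record, a quick way to confirm what the server is actually using
(the figure in the comment is just the stock default Andrew worked from;
changing it would mean editing postgresql.conf and restarting):

SHOW shared_buffers;
-- with the default of 1000 buffers of 8 kB each, that is the 8000 K
-- Andrew mentions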

> 
> A
> 



