Thread: Re: [SQL] Slow Inserts Again

Re: [SQL] Slow Inserts Again

From: "Frank Morton"
>But the transaction boundaries wouldn't have anything to do with
>Frank's real problem, which is that the insertions are getting
>slower and slower.  There's no good reason for that; and other
>people are not reporting any comparable problems.  (Considering
>that we *have* been getting trouble reports for more-than-2-gig
>tables, it's clear that people are putting large amounts of data
>into 6.5; so it's not like Frank is stressing the system more
>than it has been before.)

I should have made it more clear. I am using 6.4.2 with the "limit"
addition.

>Frank, what does the memory usage of the backend that's processing
>this insertion look like; has it been growing steadily?  I'm wondering
>whether you could have a problem with poor malloc behavior, or some
>such.

Using "ps", there is only one process accumulating lots of time. One backend
process has accumulated 2036 minutes of CPU time. Using vmstat, things look
OK memory-wise: almost nothing swapped and a few megabytes marked as free.
The CPU is 0% idle, though, and the whole system is now starting to melt
down. Even moving the mouse takes a few seconds to register. But, looking
around, I don't see any other process doing much of anything.

I'm up to 13 seconds now per insert.

I'm about to punt again and go back to figuring out how to make copy
work. At this point, I'd have to convert the insert statements to copy.

Thanks all for helping figure this out...

Frank

Re: [SQL] Slow Inserts Again

From: Tom Lane
"Frank Morton" <fmorton@base2inc.com> writes:
> I'm about to punt again and go back to figuring out how to make copy
> work. At this point, I'd have to convert the insert statements to copy.

Well, it'd be worth your while to get the quoting conventions straight
so that you could use copy instead of inserts.  But since we don't know
where this slower-and-slower behavior is coming from, I worry that you
will still see it with a copy.  Copy will avoid a lot of SQL-parsing
and query-planning overhead, but it doesn't make any real difference
when it comes down to writing tuples on the disk, AFAIK.
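The quoting conventions Tom mentions could be handled mechanically. The
sketch below converts simple single-row `INSERT INTO t VALUES (...);`
statements into tab-delimited lines suitable for `COPY ... FROM stdin` --
it is not a full SQL parser, just an illustration of the escaping COPY's
text format needs (backslash, tab, and newline escaped; `\N` for NULL;
SQL's doubled single quotes undone):

```python
import re

# One branch per literal kind: 'quoted string', NULL, or a bare token.
TOKEN = re.compile(r"'((?:[^']|'')*)'|\b(NULL)\b|([^,\s][^,]*)", re.IGNORECASE)

def copy_escape(s):
    # COPY's text format requires backslash, tab, and newline to be escaped.
    return (s.replace("\\", "\\\\")
             .replace("\t", "\\t")
             .replace("\n", "\\n"))

def insert_to_copy_line(stmt):
    """Turn a simple single-row INSERT into one tab-delimited COPY line.

    Handles only  INSERT INTO tbl VALUES (...);  with single-quoted
    string literals, NULLs, and bare (e.g. numeric) literals.
    """
    m = re.search(r"VALUES\s*\((.*)\)\s*;?\s*$", stmt,
                  re.IGNORECASE | re.DOTALL)
    if m is None:
        raise ValueError("unsupported statement: " + stmt)
    out = []
    for quoted, null, bare in TOKEN.findall(m.group(1)):
        if null:
            out.append(r"\N")            # COPY's NULL marker
        elif bare.strip():
            out.append(bare.strip())     # numeric or other bare literal
        else:
            # undo SQL's doubled single quotes, then apply COPY escaping
            out.append(copy_escape(quoted.replace("''", "'")))
    return "\t".join(out)
```

For example, `INSERT INTO t VALUES ('O''Brien', 42, NULL);` becomes the
line `O'Brien<TAB>42<TAB>\N`, which COPY will accept from stdin.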

If you stop the process now, and then start a new backend working on
the as-yet-uninserted data, is the new backend equally slow right away?
If it's a memory problem, a fresh backend might be OK for a while.
(This is the root of Herouth's suggestion to reconnect every few
thousand records...)
        regards, tom lane