Re: Performance query about large tables, lots of concurrent access - Mailing list pgsql-performance

From Karl Wright
Subject Re: Performance query about large tables, lots of concurrent access
Date
Msg-id 4677D298.2010109@metacarta.com
In response to Re: Performance query about large tables, lots of concurrent access  (Karl Wright <kwright@metacarta.com>)
Responses Re: Performance query about large tables, lots of concurrent access
Re: Performance query about large tables, lots of concurrent access
List pgsql-performance
An overnight VACUUM helped things quite a bit.  I am now getting
throughput of around 75 transactions per minute, where before I was
getting 30.  Also, the CPU is no longer pegged, and the machine's load
average has dropped to an acceptable 6-10 from somewhere above 20.
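
(For the record, the overnight pass was a straightforward VACUUM run by
hand -- something along the lines of the following, using the child
table from my earlier message as the example.  VERBOSE reports how many
dead row versions get reclaimed, and adding ANALYZE refreshes the
planner statistics in the same pass:)

-- per table, for the heavily updated tables:
VACUUM VERBOSE ANALYZE intrinsiclink;
-- or, more simply, the whole database:
VACUUM VERBOSE ANALYZE;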

While this is still pretty far off the best performance I saw (when the
tables were smaller), it's reasonably consistent with O(log(n))
performance at least.

This particular run lasted four days before a VACUUM became essential.
The symptom that indicates a VACUUM is needed seems to be that the CPU
usage of any given PostgreSQL query skyrockets.  Is this essentially
correct?
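
(A more direct indicator than CPU usage, assuming the row-level stats
collector is enabled (stats_start_collector and stats_row_level in
postgresql.conf), is to watch how many rows have been obsoleted by
updates and deletes -- roughly the dead row versions VACUUM would have
to clean up.  Something like:)

-- cumulative since the stats counters were last reset; a figure that
-- is large relative to the table's row count suggests VACUUM is overdue
SELECT relname,
       n_tup_upd + n_tup_del AS rows_obsoleted
  FROM pg_stat_user_tables
 ORDER BY rows_obsoleted DESC
 LIMIT 10;

(The same settings are what 8.1's built-in autovacuum needs, which would
spread this cleanup out continuously instead of leaving it to a
four-day, overnight lump.)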

Karl

Karl Wright wrote:
> Tom Lane wrote:
>> Karl Wright <kwright@metacarta.com> writes:
>>> - At any given time, there are up to 100 of these operations going on
>>> at once against the same database.
>>
>> It sounds like your hardware is far past "maxed out".  Which is odd
>> since tables with a million or so rows are pretty small for modern
>> hardware.  What's the CPU and disk hardware here, exactly?  What do you
>> see when watching vmstat or iostat (as appropriate for OS, which you
>> didn't mention either)?
>>
>>             regards, tom lane
>>
>
> Yes, I was surprised as well, which is why I decided to post.
>
> The hardware is a Dell 2950 with two dual-core processors, 16 GB of
> memory, and a RAID disk controller.  The operating system is Debian
> Linux (sarge plus mods, currently using the PostgreSQL 8.1 backport).
>
> Also, as I said before, I have done extensive query analysis and found
> that the plans for the queries that are taking a long time are in fact
> very reasonable.  Here's an example from the application log of a query
> that took way more time than its plan would seem to indicate it should:
>
>  >>>>>>
> [2007-06-18 09:39:49,783]ERROR Found a query that took more than a
> minute: [UPDATE intrinsiclink SET isnew=? WHERE ((jobid=? AND
> childidhash=? AND childid=?)) AND (isnew=? OR isnew=?)]
> [2007-06-18 09:39:49,783]ERROR   Parameter 0: 'B'
> [2007-06-18 09:39:49,783]ERROR   Parameter 1: '1181766706097'
> [2007-06-18 09:39:49,783]ERROR   Parameter 2:
> '7E130F3B688687757187F1638D8776ECEF3009E0'
> [2007-06-18 09:39:49,783]ERROR   Parameter 3:
> 'http://norwich.openguides.org/?action=index;index_type=category;index_value=Cafe;format=atom'
>
> [2007-06-18 09:39:49,783]ERROR   Parameter 4: 'E'
> [2007-06-18 09:39:49,783]ERROR   Parameter 5: 'N'
> [2007-06-18 09:39:49,797]ERROR  Plan: Index Scan using i1181764142395 on
> intrinsiclink  (cost=0.00..14177.29 rows=5 width=253)
> [2007-06-18 09:39:49,797]ERROR  Plan:   Index Cond: ((jobid = $2) AND
> ((childidhash)::text = ($3)::text))
> [2007-06-18 09:39:49,797]ERROR  Plan:   Filter: ((childid = ($4)::text)
> AND ((isnew = ($5)::bpchar) OR (isnew = ($6)::bpchar)))
> [2007-06-18 09:39:49,797]ERROR
> <<<<<<
> (The intrinsiclink table above is the "child table" I was referring to
> earlier, with 13,000,000 rows at the moment.)
>
> Overnight I shut things down and ran a VACUUM operation to see if that
> might help.  I'll post again when I find out if indeed that changed any
> performance numbers.  If not, I'll be able to post vmstat output at that
> time.
>
> Karl
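
(For reference, the quoted UPDATE can be re-run by hand under EXPLAIN
ANALYZE with the logged parameter values substituted in, which shows the
actual runtime next to the estimates -- a scan that returns only a
handful of rows but still takes a long time is usually wading through
dead row versions or a bloated index.  Note that EXPLAIN ANALYZE really
executes the UPDATE, hence the ROLLBACK; the quoted literals below are a
guess at the column types:)

BEGIN;
EXPLAIN ANALYZE
UPDATE intrinsiclink
   SET isnew = 'B'
 WHERE jobid = '1181766706097'
   AND childidhash = '7E130F3B688687757187F1638D8776ECEF3009E0'
   AND childid = 'http://norwich.openguides.org/?action=index;index_type=category;index_value=Cafe;format=atom'
   AND (isnew = 'E' OR isnew = 'N');
ROLLBACK;  -- don't let the hand-run UPDATE actually take effect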

