Re: seperate swap drive, was Re: [ADMIN] Speed problem - Mailing list pgsql-admin

From David Ben-Yaacov
Subject Re: seperate swap drive, was Re: [ADMIN] Speed problem
Date
Msg-id 004801be08cf$bb3cd090$66097cc0@itd.sterling.com
In response to seperate swap drive, was Re: [ADMIN] Speed problem  (Terry Mackintosh <terry@terrym.com>)
Responses Re: seperate swap drive, was Re: [ADMIN] Speed problem
List pgsql-admin
We have had almost exactly the same problem as Mr. Mackintosh.  We have
three Perl ingest processes running continuously, inserting data into and
deleting data from the database.  Intermittently, the database slows to a
crawl and CPU usage climbs to just under 100%.  The Perl ingest processes
insert into and delete from the same tables.

To try to stop this behavior, I wrote a script that vacuums
the database tables once every hour.  The vacuum script runs while the
Perl ingest processes are still active (they are never shut down).  This
may or may not have helped the situation; I believe it does.
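
For the curious, the vacuum script boils down to roughly the sketch below,
using the same Pg module our ingest is built on (the database name is just
a stand-in here; the real script also loops over tables and logs output):

    #!/usr/bin/perl
    # vacuum_hourly.pl -- run hourly from cron, e.g.:
    #   0 * * * * /usr/local/bin/vacuum_hourly.pl
    use Pg;

    # "weather" is an illustrative database name
    my $conn = Pg::connectdb("dbname=weather");
    die "connect failed: ", $conn->errorMessage, "\n"
        unless $conn->status == PGRES_CONNECTION_OK;

    # VACUUM ANALYZE reclaims the space our constant insert/delete
    # traffic leaves behind and refreshes the planner statistics
    my $result = $conn->exec("VACUUM ANALYZE");
    warn "vacuum failed: ", $conn->errorMessage, "\n"
        unless $result->resultStatus == PGRES_COMMAND_OK;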

When I do notice the problem, I generally shut down the ingest and then
try to vacuum the database manually.  I usually find that the indexes I
set up on some of the large tables no longer match the actual table data.
In the past, I have dropped the index and then recreated it.  In extreme
cases, I have deleted the actual UNIX file that corresponds to the table,
then dropped the table reference and recreated the table.
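
The drop-and-recreate step is nothing more elaborate than the following
sketch (the index and table names are made up for illustration):

    use Pg;

    my $conn = Pg::connectdb("dbname=weather");   # stand-in dbname
    die $conn->errorMessage unless $conn->status == PGRES_CONNECTION_OK;

    # Drop a suspect index and rebuild it from the table data;
    # "obs_station_idx" and "observations" are hypothetical names
    for my $sql ("DROP INDEX obs_station_idx",
                 "CREATE INDEX obs_station_idx ON observations (station)") {
        my $result = $conn->exec($sql);
        warn "$sql failed: ", $conn->errorMessage, "\n"
            unless $result->resultStatus == PGRES_COMMAND_OK;
    }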

Even under normal operation, when the database is fast, we still have
problems inserting data.  We have examined the insert queries identified
in our error log to see if there is a problem with our code.  Apparently
not: an insert query that failed while the ingest was running succeeds
without complaint when entered manually.  Anyone have any thoughts?
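
For reference, the ingest checks each insert roughly like this, which is
how a failing query ends up in our error log (the table, columns, and
values here are hypothetical):

    # Inside the ingest loop: log any insert the backend rejects so we
    # can replay it by hand later.  Names and values are illustrative.
    my $sql = "INSERT INTO observations (station, report) " .
              "VALUES ('KSTL', 'SA 0455 KSTL')";
    my $result = $conn->exec($sql);
    if ($result->resultStatus != PGRES_COMMAND_OK) {
        print STDERR "insert failed: ", $conn->errorMessage,
                     "\nquery was: $sql\n";
    }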

You may be wondering what kind of data we store.  The system is a
development system fed by a real-time alphanumeric weather feed, so if we
drop the tables, and thus the weather data, we can always wait a couple of
hours for it to reload.  On our production/deployed systems, this is not
an option.

As to the wise suggestions of Mr. Fournier and others: we have adequate
RAM (256 MBytes), adequate CPU (two R10000s), adequate swap (260 MBytes),
and the postgres user located on a separate disk from the swap file.
(SGI IRIX 6.5 dual-processor Octane; Postgres 6.3 built with SGI's C
compiler; the Pg Perl interface.)  We tried building 6.3.2 with both
GNU's C compiler and SGI's C compiler, but the problem appeared much
worse; obviously we have not yet tried 6.4.

One difference from Mr. Mackintosh: we do not run out of memory.

As to running with fsync disabled: we have tried both ways but usually
disable fsync.  The reason is perhaps humorous: I sit by the machine, and
it makes a lot of drive noise if we do not disable fsync.  Could that be
a potential cause of our problem?

Finally, if we lose the database connection while running our Perl ingest
routines, we automatically try to reconnect to the database (as per Mr.
Fournier's advice below).
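
The reconnect logic is no more elaborate than this sketch (the 30-second
retry interval is arbitrary):

    use Pg;

    # Keep retrying until the backend comes back; called whenever an
    # exec fails because the connection has died.
    sub get_connection {
        while (1) {
            my $conn = Pg::connectdb("dbname=weather");   # stand-in dbname
            return $conn if $conn->status == PGRES_CONNECTION_OK;
            warn "reconnect failed: ", $conn->errorMessage, "\n";
            sleep 30;
        }
    }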

My question is this: has anyone got advice for Mr. Mackintosh or me?
Has anyone experienced this and found a solution?

Thank you in advance.
-----Original Message-----
From: The Hermit Hacker <scrappy@hub.org>
To: Terry Mackintosh <terry@terrym.com>
Cc: pgsql-admin@postgreSQL.org <pgsql-admin@postgreSQL.org>; Tony Reina
<tony@nsi.edu>
Date: Wednesday, November 04, 1998 9:24 PM
Subject: Re: seperate swap drive, was Re: [ADMIN] Speed problem


>On Wed, 4 Nov 1998, Terry Mackintosh wrote:
>
>> > I have a rather large database as well (> 2 Meg of tuples). I thought
>> > my system was souped up enough: PII/400 MHz (100 MHz bus) 256 Meg SDRAM,
>> > 18 Gig SCSI harddrive, Red Hat Linux 5.1. However, my swap space (512
>> > Meg) is on the same harddrive as the database (albeit on a separate
>> > partition). It sounds like you are saying that this is a no-no.
>>
>> Just that under heavy loads it may degrade performance as you yourself
>> mention.
>>
>> > The database runs quite fast except with processes involving repetitive
>> > inserts or updates. With each successive update in a continuous process,
>> > the speed drops (almost like an exponentially decreasing process). Plus,
>> > when this happens, I can't really use the computer that runs the
>> > database because it is soooooo slow. When I run top, the computer is
>> > using all 256 Meg of memory and going about 30-40 meg into swap space.
>> > From what you've suggested, this 30-40 meg of swap is also competing
>> > with the database trying to write to the harddrive (since they are using
>> > the same head).
>>
>> This is the type of performance degradation I was referring to.
>>
>> > If I put in a second drive exclusively for the swap space, could this
>> > increase my speed? Or, would it be better to invest in more RAM so that
>> > the job wouldn't need to use any swap space at all?
>>
>> Why not both? :-)
>
>
> Are you running with fsync() disabled?
>
> What version of PostgreSQL are you running?  v6.4 has several
>memory leak fixes in it, which may or may not help...on long term
>connections, a memory leak *may* be contributing to your problem.  If you run
>top while doing the 'update/inserts', does the process size just continue
>to rise?
>
> Something else to try...close and reconnect your insert/update
>process(es).  Not a long term solution, just curious if that shows an
>overall speed improvement.  Similar to the 'memory leak' problem, at least
>this will let go of the process, clean out the memory, and start over
>again....
>
>Marc G. Fournier
>Systems Administrator @ hub.org
>primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
>
>

