Thread: Re: separate swap drive, was Re: [ADMIN] Speed problem

Re: separate swap drive, was Re: [ADMIN] Speed problem

From: "Kaj-Michael Lang"
>> Small question: doesn't Linux only use around 128 Megs tops for swap
>> space? I thought I read this in a HOWTO somewhere.
>
> If it does, I'll have to add that to my list of reasons why Linux
>is a bad operating system :)


That was the limit of ONE swap partition/file; you can have many swap
partitions/files. And if I remember correctly, this is fixed in the 2.1.x
kernels. (I might be wrong...)
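
For example, assuming a spare partition at /dev/hdb1 (the device name is
only an illustration), a second swap area can be added like this:

    mkswap /dev/hdb1       # initialize the partition as swap space
    swapon /dev/hdb1       # enable it immediately

    # /etc/fstab entry so it is enabled again at boot:
    /dev/hdb1   none   swap   sw   0   0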

/--------------------------------------------------------------------\
| Kaj-Michael Lang        | WWW:    http://www.tal.org               |
| Kaskentie 5 C9          | E-Mail: milang@tal.org                   |
| 20720 Turku             |         milang@mobitdata.fi              |
| FINLAND                 |         milang@infa.abo.fi               |
|-------------------------|         klang@abo.fi                     |
| GSM: +358-(0)40-5271058 | FTP:    ftp://ftp.tal.org                |
|--------------------------------------------------------------------|
|                               CS@ÅA                                |
| Software is like sex; it's better when it's free. - Linus Torvalds |
\--------------------------------------------------------------------/




Re: separate swap drive, was Re: [ADMIN] Speed problem

From: "David Ben-Yaacov"
We have had almost exactly the same problem as Mr. Mackintosh.  We have
three Perl ingest processes running continuously, inserting data into and
deleting data from the database.  Although it is not consistent, the
database sometimes slows to a crawl and CPU usage climbs to just under
100%.  The Perl ingest processes insert into and delete from the same
tables.

To try to stop this behavior (constant deletes leave dead rows behind that
only a vacuum reclaims), I wrote a script that vacuums the database tables
once every hour.  The vacuum script runs at the same time as the Perl
ingest processes do (the Perl ingest processes never get shut down).  This
may or may not have helped the situation; I believe it does.
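
A minimal sketch of such an hourly job, assuming a cron entry and a
database named weatherdb (the name is made up):

    # crontab entry: vacuum the database once an hour, on the hour
    0 * * * *   psql -c "VACUUM ANALYZE" weatherdb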

When I do notice the problem, I generally shut down the ingest, and then
try to vacuum the database manually.  I usually find that the indexes that
I set up on some of the large tables do not correlate to the actual table
data.  In the past, I have deleted the index and then recreated the index.
In extreme cases, I have deleted the actual UNIX file that corresponds to
the table, and then deleted the table reference and then recreated the
table.
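
The index rebuild itself is plain SQL; a sketch, with the table and column
names invented for illustration:

    -- rebuild a suspect index by dropping and recreating it
    DROP INDEX observations_station_idx;
    CREATE INDEX observations_station_idx ON observations (station_id);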

Even under normal operations when the database is fast, we still have
problems inserting data.  We have examined the insert query identified in
our error log to see if there is a problem with our code.  Apparently not,
as the insert query that failed while running the ingest process has no
problems when entered manually.  Anyone have any thoughts??

You may be wondering what kind of data we store.  The system is a
development system utilizing a real-time alphanumeric weather feed.  So if
we drop the tables and thus the weather, we can always wait a couple of
hours for it to reload.  On our production/deployed systems, this is not
an option.

As to the wise suggestions of Mr. Fournier and others, we have adequate
RAM, 256 MBytes, adequate CPU, 2 R10000s, adequate swap, 260 Mbytes, and
the postgres user located on a separate disk from the swap file.  (SGI
IRIX 6.5 Dual Processor Octane, Postgres 6.3 built using SGI's C compiler,
Pg)  (We tried building 6.3.2 using GNU's C compiler and SGI's C compiler
but the problem appeared much worse; obviously we have not yet tried 6.4).

One difference from Mr. Mackintosh: we do not run out of memory.

As to running with fsync disabled, we have tried both ways but usually
disable fsync.  The reason is perhaps humorous; I sit by the machine and
it makes a lot of drive noise if we do not disable fsync.  Could that be a
potential cause of our problem?
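
(For anyone following along: in the 6.x releases, fsync is turned off by
passing -F to the backends through the postmaster, roughly as below; the
data directory path is just an example.)

    # start the postmaster with per-backend fsync disabled (-F)
    postmaster -o -F -D /usr/local/pgsql/data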

Finally, if we lose the database connection when running our Perl ingest
routines, we automatically try to reconnect to the database (as per Mr.
Fournier's advice below).
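
The reconnect logic is nothing fancy; a rough sketch of the idea using the
Pg module (the database name and subroutine name are invented for
illustration):

    use Pg;

    # keep retrying until the backend accepts connections again
    sub reconnect_db {
        my $conn;
        for (;;) {
            $conn = Pg::connectdb("dbname=weather");
            return $conn if $conn->status == PGRES_CONNECTION_OK;
            sleep(10);    # back off before the next attempt
        }
    }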

My question is this:  Has anyone got some advice for Mr. Mackintosh or me?
Has anyone experienced this and found a solution?

Thank you in advance.
-----Original Message-----
From: The Hermit Hacker <scrappy@hub.org>
To: Terry Mackintosh <terry@terrym.com>
Cc: pgsql-admin@postgreSQL.org <pgsql-admin@postgreSQL.org>; Tony Reina <tony@nsi.edu>
Date: Wednesday, November 04, 1998 9:24 PM
Subject: Re: separate swap drive, was Re: [ADMIN] Speed problem


>On Wed, 4 Nov 1998, Terry Mackintosh wrote:
>
>> > I have a rather large database as well (> 2 Meg of tuples). I thought
>> > my system was souped up enough: PII/400 MHz (100 MHz bus), 256 Meg
>> > SDRAM, 18 Gig SCSI hard drive, Red Hat Linux 5.1. However, my swap
>> > space (512 Meg) is on the same hard drive as the database (albeit on a
>> > separate partition). It sounds like you are saying that this is a no-no.
>>
>> Just that under heavy loads it may degrade performance, as you yourself
>> mention.
>>
>> > The database runs quite fast except with processes involving repetitive
>> > inserts or updates. With each successive update in a continuous process,
>> > the speed drops (almost like an exponentially decreasing process). Plus,
>> > when this happens, I can't really use the computer that runs the
>> > database because it is soooooo slow. When I run top, the computer is
>> > using all 256 Meg of memory and going about 30-40 Meg into swap space.
>> > From what you've suggested, this 30-40 Meg of swap is also competing
>> > with the database trying to write to the hard drive (since they are
>> > using the same head).
>>
>> This is the type of performance degradation I was referring to.
>>
>> > If I put in a second drive exclusively for the swap space, could this
>> > increase my speed? Or, would it be better to invest in more RAM so that
>> > the job wouldn't need to use any swap space at all?
>>
>> Why not both? :-)
>
>
> Are you running with fsync() disabled?
>
> What version of PostgreSQL are you running?  v6.4 has several
>memory leak fixes in it, which may or may not help...on long-term
>connections, a memory leak *may* be contributing to your problem.  If you
>run top while doing the 'update/inserts', does the process size just
>continue to rise?
>
> Something else to try...close and reconnect your insert/update
>process(es).  Not a long-term solution, just curious if that shows an
>overall speed improvement.  Similar to the 'memory leak' problem, at least
>this will let go of the process, clean out the memory, and start over
>again....
>
>Marc G. Fournier
>Systems Administrator @ hub.org
>primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
>
>


Re: separate swap drive, was Re: [ADMIN] Speed problem

From: The Hermit Hacker
On Thu, 5 Nov 1998, David Ben-Yaacov wrote:

> Even under normal operations when the database is fast, we still have
> problems inserting data.  We have examined the insert query identified
> in our error log to see if there is a problem with our code.
> Apparently not, as the insert query that failed while running the
> ingest process has no problems when entered manually.  Anyone have any
> thoughts??

    Shared memory corruption?

> You may be wondering what kind of data we store.  The system is a
> development system utilizing a real-time alphanumeric weather feed.
> So if we drop the tables and thus the weather, we can always wait a
> couple of hours for it to reload.  On our production/deployed systems,
> this is not an option.

    Wow...how many records per second are being ingested?

> As to the wise suggestions of Mr. Fournier and others, we have
> adequate RAM, 256 MBytes, adequate CPU, 2 R10000s, adequate swap, 260
> Mbytes, and the postgres user located on a separate disk from the swap
> file.  (SGI IRIX 6.5 Dual Processor Octane, Postgres 6.3 built using
> SGI's C compiler, Pg)  (We tried building 6.3.2 using GNU's C compiler
> and SGI's C compiler but the problem appeared much worse; obviously we
> have not yet tried 6.4).

    How hard is it for you to upgrade to 6.4?  I'd be curious whether the
problem does become worse and, if so, if you have a little time, would
love to see if we can debug the reasons why...sounds like you guys would
make for a great test case...

> As to running with fsync disabled, we have tried both ways but usually
> disable fsync.  The reason is perhaps humorous; I sit by the machine and
> it makes a lot of drive noise if we do not disable fsync.  Could that be
> a potential cause of our problem?

    Shouldn't be...but I won't say impossible...

> Finally, if we lose the database connection when running our Perl
> ingest routines, we automatically try to reconnect to the database (as
> per Mr. Fournier's advice below).


> My question is this:  Has anyone got some advice for Mr. Mackintosh or
> myself? Has anyone experienced this and found a solution?

    First advice, move up to 6.4 ... if you can get it to run there,
even with the same problems, at least you are working with a much newer
set of features and fixes, and we should be able to direct you better in
how to debug it...the same applies to Terry, of course.  Having two with
similar problem reports should be very helpful in debugging this...

    Terry's might be easier to work with because he's running *cough*
*gack* Linux *choke* ... but "the more, the merrier" :)

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org