SLOOOOOOOW - Mailing list pgsql-general

From: Jürgen Rose
Subject: SLOOOOOOOW
Msg-id: 431E0597.4080803@gmx.de
List: pgsql-general

Sorry, but I'd better use this email address; I just hate using Outlook
for this stuff.

To Peter Eisentraut

Yes, I've read the chapter in the manual.

To Michael Glaesemann

Locally I run the database on my laptop (Dell D800, 1 GB RAM), though
inside VMware with 512 MB of RAM assigned. The target platform is a
dual-processor machine with 2 GB.

But, and that's the big but here, I don't care. For me a database has to
perform satisfactorily in the first place, without twiddling obscure
knobs or pushing levers just to get acceptable performance on a small
set of data. Heck, I'm talking about maybe 45,000 records in total!!!
I have used Interbase, MySQL, SQLite, and SQL Server before, and for
this project PostgreSQL was prescribed, so I had to use it. Which is
fine, I wouldn't mind if I weren't having such troubles.

What I'm working on is just a redesign of a database with some
historically grown tables. There's not much in it, but some tables
should be merged and others have to be split up. No big deal. My basic
idea was to use the flexibility of rules to provide a transparent
interface to the frontend, which has the big advantage that the
frontend doesn't need to change in most places at all. We have a bit
of a homegrown PHP framework to display and manipulate the data on the
frontend side. Unfortunately it is only easy if you access a single
table and don't have to update several tables. So my idea was to use
the rule system to put data into the database as well and distribute
it across the underlying tables: views that separate the physical
model from the logical model. That's best practice, isn't it?
Unfortunately there seems to be no way to create triggers on views,
which is what I need. Plain insert rules are not enough, because I'm
working with data that has just been created, so that's not an option.
As a workaround I create a table whose only purpose is to carry an
insert trigger that distributes the data across the real tables. For
selecting, updating, and deleting, the rules are sufficient.
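
Schematically, the whole construction looks like this (made-up names,
not my real schema):

    -- two physical tables behind one logical view
    CREATE TABLE person_core  (id integer PRIMARY KEY, name text);
    CREATE TABLE person_notes (id integer REFERENCES person_core, notes text);

    -- the view presents the old, merged layout to the frontend
    CREATE VIEW person AS
        SELECT c.id, c.name, n.notes
        FROM person_core c LEFT JOIN person_notes n ON n.id = c.id;

    -- updates against the view are rewritten onto the real tables
    CREATE RULE person_upd AS ON UPDATE TO person DO INSTEAD (
        UPDATE person_core  SET name  = NEW.name  WHERE id = OLD.id;
        UPDATE person_notes SET notes = NEW.notes WHERE id = OLD.id;
    );

    -- the workaround: a real table that exists only to carry the
    -- insert trigger, since triggers on views are not possible
    CREATE TABLE person_in (id integer, name text, notes text);

    CREATE FUNCTION person_in_trig() RETURNS trigger AS $$
    BEGIN
        INSERT INTO person_core  (id, name)  VALUES (NEW.id, NEW.name);
        INSERT INTO person_notes (id, notes) VALUES (NEW.id, NEW.notes);
        RETURN NULL;  -- suppress the row; nothing stays in person_in
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER person_in_t BEFORE INSERT ON person_in
        FOR EACH ROW EXECUTE PROCEDURE person_in_trig();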

So I'm actually merging tables with approx. 8,000 + 14,000 + 30,000
records, so we're talking about a small database. Selecting from the
views is slow; there are only around 50,000 records in total, yet it
can take up to several seconds to get the data out of the views, which
is just not fast enough. Updating is even slower: updating a single
record takes ages.
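
For the record, this is how I look at the plans (a hypothetical query
against the view from the sketch above):

    EXPLAIN ANALYZE SELECT * FROM person WHERE id = 42;

The rule system expands the view into the join on the underlying
tables, so the plan shows what actually gets executed.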

The actual migration process, moving the old data to the new tables,
is just agonizingly slow. Moving the data from the small table (8,000
entries) takes somewhere (not deterministically) between a few minutes
and 40 minutes. Essentially it is just a select from one table into
the compatibility view of the new table. It seems to me that each
additional row makes the database slower. I noticed that whichever of
the two tables (8,000 or 14,000 entries) happens to be migrated first
is migrated faster. Migrating the 30,000 entries (and that one has to
be last) then takes **hours**!!!
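
Schematically, one migration step is nothing more than this (made-up
names again, using the workaround insert table from the sketch above):

    INSERT INTO person_in (id, name, notes)
    SELECT id, name, notes FROM old_person;

so every single row passes through the insert trigger.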

The migration of each table consists of two parts: first move the data
out of the old table, then update all the linked tables (I had to drop
the joins; afterwards the references have to point to the new tables).
At this point I'm using a lot of subselects, which are slow, but I see
no other way.
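
Each of those link updates looks schematically like this (illustrative
names; migration_map is a hypothetical table mapping old ids to new
ones):

    UPDATE address
       SET person_id = (SELECT m.new_id
                          FROM migration_map m
                         WHERE m.old_id = address.person_id);

That is one correlated subselect evaluated per row of address.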

And I actually vacuum and analyze the database after each step; all
useful indexes are in place and are being used.
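
Concretely, after each step I run

    VACUUM ANALYZE;

and I check with EXPLAIN that the planner actually uses the indexes.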

I did some serious stuff with SQL Server and Interbase, and I **never**
had these performance problems.

Enough of ranting, but I'm totally frustrated.

With best regards
Jürgen
