Re: Final decision - Mailing list pgsql-performance

From John A Meinel
Subject Re: Final decision
Date
Msg-id 426FC9E5.60905@arbash-meinel.com
In response to Final decision  ("Joel Fradkin" <jfradkin@wazagua.com>)
Responses Re: Final decision
List pgsql-performance
Joel Fradkin wrote:
> I spent a great deal of time over the past week looking seriously at
> Postgres and MySQL.
>
> Objectively I am not seeing that much of an improvement in speed with
> MySQL, and we have a huge investment in Postgres.
>
> So I am planning on sticking with Postgres for our production database
> (going live this weekend).

Glad to hear it. Good luck.
>
>
...
>
> Things I still have to make better are my settings in the config; I have
> it set to no merge joins and no seq scans.

Just realize, you probably *don't* want to set that in postgresql.conf.
You just want to issue "SET enable_seqscan TO off" before issuing one
of the queries that is mis-planned.

Because there are lots of times when a merge join or a seq scan is
actually faster than the alternatives. And since I don't think you tested
every query you are going to run, you probably want to let the planner
handle the ones it gets right. (Usually it does quite a good job.)
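The session-local override described above might look like this (the query and table names are invented for illustration):

```sql
-- Turn the planner settings off only for this session, not globally
-- in postgresql.conf.
SET enable_seqscan TO off;
SET enable_mergejoin TO off;

-- Run just the one query the planner mis-plans (hypothetical query):
SELECT * FROM audit WHERE client_id = 42;

-- Restore the defaults so every other query keeps the planner's choices.
RESET enable_seqscan;
RESET enable_mergejoin;
```

Issued this way, the settings last only for the current session (or until RESET), so the rest of the workload is unaffected.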

Also, I second the notion of getting a confidentiality contract. There
have been several times where someone had a pathological case, and by
sending the data to someone (Tom Lane), they were able to track down and
fix the problem.

>
> I am going to have to use flattened history files for reporting (I saw a
> huge difference here: the view for the audit cube took 10 minutes in
> EXPLAIN ANALYZE and the flattened file took under one second).
>
>
>
> I understand both of these practices are not desirable, but I am at a
> place where I have to get it live and these are items I could not resolve.

Nothing wrong with a properly updated flattened table. You just need to
be careful to keep it consistent with the rest of the data (update
triggers, lazy materialization, etc.).
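A trigger-maintained flattened table could be sketched roughly as follows; all table and column names here are invented, and the real join would match whatever the audit cube view does:

```sql
-- Hypothetical sketch: keep a flattened reporting table in sync with its
-- normalized source table via an AFTER trigger.
CREATE OR REPLACE FUNCTION sync_audit_flat() RETURNS trigger AS $$
BEGIN
    -- Replace the flattened row for this key with a freshly joined one.
    DELETE FROM audit_flat WHERE audit_id = NEW.audit_id;
    INSERT INTO audit_flat (audit_id, user_name, action, logged_at)
    SELECT a.audit_id, u.name, a.action, a.logged_at
      FROM audit a
      JOIN users u ON u.user_id = a.user_id
     WHERE a.audit_id = NEW.audit_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_flat_sync
AFTER INSERT OR UPDATE ON audit
FOR EACH ROW EXECUTE PROCEDURE sync_audit_flat();
```

This keeps the flattened copy consistent on every write, at some insert/update cost; lazy materialization (rebuilding the table on a schedule) trades freshness for cheaper writes.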

>
> I may try some more time with CommandPrompt.com, or seek other
> professional help.
>
>
>
> In stress testing I found Postgres was holding up very well (but my IIS
> servers could not handle much of a load to really push the server).
>
> I have a few desktops acting as IIS servers at the moment, and if I
> pushed past 50 concurrent users it pretty much blew the server up.
>
> On inserts that number was more like 7 concurrent users, and updates was
> also around 7 users.
>
>
>
> I believe that was totally IIS, not Postgres, but I am curious whether
> using Postgres ODBC will put more stress on the IIS side than MSSQL did.
>

What do you mean by "blew up"? I assume you have IIS on a different
machine than the database. Are you saying that the database slowed down
dramatically, or that the machine crashed, or just that the web
interface became unresponsive?

> I did have a question: for any folks who are using two servers, one for
> reporting and one for data entry, which system should be the beefier?
>
> I have a 2-proc machine I will be using, and I can either put Sears off by
> themselves on this machine, or split up functionality and have one server
> for reporting and one for inserts and updates; so I am not sure which
> machine would be best for which spot (reminder: the more robust is a
> 4-proc with 8 gigs, and the 2-proc has 4 gigs, both Dells).
>

It probably depends on what queries are being done, and what kind of
response times you need. Usually the update machine needs the stronger
hardware, so that it can keep up with the writes.

But if you can wait longer to update data than to query data, obviously
the opposite is true. It all depends on load, and that is pretty much
application defined.

>
>
> Thank you for any ideas in this arena.
>
>
>
> Joel Fradkin
>

John
=:->
