Thread: FW: Tx forecast improving hardware capabilities.

From: "Sebastian Lallana"

Hello:

We are having serious performance problems using JBOSS and PGSQL.

I'm sure the problem lies in the application itself (and not in JBoss or PostgreSQL), but the fact is that we are running both JBoss and PostgreSQL on desktop equipment (an Athlon 2600, 1 GB RAM, an IDE HDD with a 60 MB/s transfer rate), and the questions arise:

If we upgrade our hardware to a dual processor, would the transactions per second increase significantly? Would PostgreSQL take advantage of SMP? Presumably yes, but can we forecast the number of TPS? What we need is a paper with figures showing the expected performance in different environments: some study of the "degree of correlation" between TPS and number of processors, cache, frequency, word size, architecture, etc.

Does something like this exist? Does anybody have experience with this subject?

Thanks in advance and best regards.

P.S. I've been looking at www.tpc.org but I couldn't find anything valuable.

Re: FW: Tx forecast improving hardware capabilities.

From: Josh Berkus
Sebastian,

> We are having serious performance problems using JBOSS and PGSQL.

How about some information about your application? Performance tuning
approaches vary widely according to what you're doing with the database.

Also, read this:
http://www.powerpostgresql.com/PerfList

> I'm sure the problem lies in the application itself (and not in JBoss
> or PostgreSQL), but the fact is that we are running both JBoss and
> PostgreSQL on desktop equipment (an Athlon 2600, 1 GB RAM, an IDE HDD
> with a 60 MB/s transfer rate), and the questions arise:

Well, first off, the IDE HDD is probably killing performance unless your
application is 95% read or greater.

> If we upgrade our hardware to a dual processor, would the transactions
> per second increase significantly? Would PostgreSQL take advantage of
> SMP? Presumably yes, but can we forecast the number of TPS?

If this is an OLTP application, chances are that nothing is going to improve
performance until you get decent disk support.
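To put a rough number on that (an illustrative back-of-envelope calculation, not a measurement of any particular drive): without a battery-backed write cache, each COMMIT must wait for an fsync to the WAL, and a lone IDE disk can complete at most on the order of one fsync per platter rotation. Assuming a typical 7200 rpm spindle:

```python
# Back-of-envelope ceiling on write-transaction rate for a single spinning
# IDE disk with no write cache: one fsync per COMMIT, and roughly one full
# platter rotation per fsync. The 7200 rpm figure is an assumption.

def max_commits_per_second(rpm: float) -> float:
    """Upper bound on commits/sec imposed by rotational latency alone."""
    rotations_per_second = rpm / 60.0
    return rotations_per_second

print(max_commits_per_second(7200))  # 120.0 commits/sec, regardless of CPU
```

That ~120/sec ceiling is why faster CPUs often change nothing for write-heavy OLTP on this kind of hardware.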

> What we need is a paper with figures showing the expected performance
> in different environments: some study of the "degree of correlation"
> between TPS and number of processors, cache, frequency, word size,
> architecture, etc.

I don't think such a thing exists even for Oracle. Hardware configuration
for maximum performance is almost entirely dependent on your application.

If it helps, running DBT2 (an OLTP test devised by OSDL after TPC-C), I can
easily get 1700 new orders per minute (NOTPM) (about 3000 total
multiple-write transactions per minute) on a quad-pentium-III with 4GB RAM
and 14 drives, and 6500 notpm on a dual-Itanium machine.

> P.S. I've been looking at www.tpc.org but I couldn't find anything
> valuable.

Nor would you for any real-world situation, even if we had a TPC benchmark
(they are involved and expensive; give us a couple of years). The TPC
benchmarks are more of a litmus test that your database system & platform are
"competitive"; they don't really relate to real-world performance (unless you
have the budget for a 112-disk system!)

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: FW: Tx forecast improving hardware capabilities.

From: David Hodgkinson
On 18 Aug 2005, at 16:01, Sebastian Lallana wrote:


> Does something like this exist? Does anybody have experience with
> this subject?

I've just been through this with a client with both a badly tuned Pg and
an application being less than optimal.

First, find a benchmark. Just something you can hold on to. For us, it
was the generation time of the site's home page. In this case, 7
seconds.
We looked hard at postgresql.conf, planned the memory usage, sort_mem
and all that. That was a boost. Then we looked at the queries that were
being thrown at the database. Over 200 to build one page! So, a layer
of caching was built into the web server layer. Finally, some frequently
occurring combinations of queries were pushed down into stored procs.
We got the page gen time down to 1.5 seconds AND the server stayed stable
under extreme stress. So, a fair win.
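For reference, the kind of postgresql.conf changes involved look roughly like this. The parameter names are from the 7.4 era (8.0 renames sort_mem to work_mem), and the values are illustrative for a machine with about 1 GB of RAM, not recommendations; tune against your own benchmark.

```
# Illustrative settings for a ~1 GB RAM box (PostgreSQL 7.4-era names).
shared_buffers = 16384          # 128 MB in 8 kB pages
sort_mem = 8192                 # 8 MB per sort, in kB (work_mem in 8.0+)
effective_cache_size = 65536    # tell the planner the OS caches ~512 MB
checkpoint_segments = 8         # spread out checkpoints under write load
```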

Thanks to cms for several clues.

So, without understanding your application and where it's taking the
time, you can't begin to estimate hardware requirements.