Re: Performance/Issues with CMP and JBoss - Mailing list pgsql-general

From Richard Huxton
Subject Re: Performance/Issues with CMP and JBoss
Msg-id 44D0BEB5.8010609@archonet.com
In response to Performance/Issues with CMP and JBoss  ("Andy Dale" <andy.dale@gmail.com>)
List pgsql-general
Andy Dale wrote:
> Hi,
>
> I will explain in more details what the test (extremely simple) program is
> actually doing.  A session bean receives some data (roughly 3K) and then
> persists this data to the database with EntityManager.persist() (using the
> EJB 3.0 Persistence API that comes with JBoss).  Once the persist method
> returns a message is sent to a message driven bean where it tries to get
> the
> previously persisted data from the database using the EntityManager.find()
> method, this is where we run into problems with Postgres, it only seems to
> find the persisted object 5% of the time.  I don't know if you class it
> as a
> performance problem or an error but seems as it works in other databases i
> am more inclined to classify this as an error.

Well, that's easy to decide:
1. Did the transaction that stored the object complete successfully?
2. Should the results of that transaction be visible from the reading
transaction?

If yes and yes, but you can't find the object, there's an error.
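You can check those two questions by hand from psql with two sessions (the table and column names here are made up for illustration; substitute whatever the EntityManager actually generates):

```sql
-- Session A (the writer): confirm the INSERT really commits.
BEGIN;
INSERT INTO mytable (id, payload) VALUES (1, '...');
COMMIT;

-- Session B (the reader): under READ COMMITTED, PostgreSQL's default
-- isolation level, any row committed before this SELECT starts must
-- be visible.
SELECT * FROM mytable WHERE id = 1;
```

If session B sees the row but your message-driven bean doesn't, suspect the application side, e.g. the find() running before the storing transaction has actually committed.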

> When i say works really well, i mean it in the sense that a simple query
> that the EntityManager produces for the find operation is something as
> simple as "select <columns> from table where <primary key> = ?", this is
> causing problems for Postgres, the query is in the form of a prepared
> statement so could this be causing any problems ?

Prepared queries are planned before the parameter values are known, so
certain optimisations aren't available, but for a single-table fetch on
columns with a unique index I'd still expect an index scan (assuming
analyse has been run recently).
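You can see what plan a prepared statement actually gets with EXPLAIN EXECUTE (table and statement names below are hypothetical; the PREPARE mirrors what the JDBC driver sends on your behalf):

```sql
PREPARE find_obj (integer) AS
    SELECT * FROM mytable WHERE id = $1;

-- With a unique index on id and reasonably fresh statistics,
-- this should show an Index Scan rather than a Seq Scan.
EXPLAIN EXECUTE find_obj (1);
```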

> As far as hardware goes my testing machine is:
>
> P4 3.0 GHz
> 1GB RAM
> 20GB of HD (IDE)
>
> But we intend to use a HP Prolient server with the following spec:
>
> Intel Xeon 3.3 GHz
> 2 GB RAM
> 146GB Ultra SCSI 320
>
> The Operating System being run on both is Fedora Core 5
>
> The server/machine also needs to run other programs and processes so we
> don't want the database to hog to much of the resources, about 10 - 20 %
> RAM
> (and how to configure it) and CPU, the current config as defined in the
> postgres.conf file is as so:

Squeezing PostgreSQL and cache-space for its data into 256MB is going to
depend on how large your DB is. Oh, and if you have a lot of updates
then disk will probably be the limiting factor.

> # -----------------------------
> # PostgreSQL configuration file
> # -----------------------------
> #
> # This file consists of lines of the form:
> #

We don't really need to see all the comment lines.

> #---------------------------------------------------------------------------
>
> # RESOURCE USAGE (except WAL)
> #---------------------------------------------------------------------------
>
>
> # - Memory -
>
> shared_buffers = 1000            # min 16 or max_connections*2, 8KB each
> #temp_buffers = 1000            # min 100, 8KB each
> #max_prepared_transactions = 5        # can be 0 or more
> # note: increasing max_prepared_transactions costs ~600 bytes of shared
> memory
> # per transaction slot, plus lock space (see max_locks_per_transaction).
> #work_mem = 1024            # min 64, size in KB
> #maintenance_work_mem = 16384        # min 1024, size in KB
> #max_stack_depth = 2048            # min 100, size in KB

OK, so you haven't actually done any configuration. PG *will* crawl with
the default settings; they're chosen so you can install it on your
5-year-old laptop without it keeling over. I recommend you read the
short article at the following URL and start from there.
   http://www.powerpostgresql.com/PerfList
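As a very rough sketch only (the right numbers depend on your data and workload, so treat these as starting points to test against your own application, not a recipe), limiting PostgreSQL to roughly 10-20% of a 1GB machine might look something like:

```
shared_buffers = 12500          # 8KB pages, so ~100MB of shared cache
work_mem = 4096                 # KB per sort/hash operation, per backend
maintenance_work_mem = 32768    # KB, used by VACUUM and CREATE INDEX
effective_cache_size = 25000    # 8KB pages, ~200MB; a planner hint only,
                                # it allocates nothing itself
```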

Oh, and if you don't know what vacuum, analyse and the autovacuum tool
are, you'll want to read the relevant parts of the manual.
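Until autovacuum is set up, the manual commands are simple enough to run from psql or a cron job (the table name below is hypothetical):

```sql
ANALYZE;                 -- refresh planner statistics, whole database
VACUUM ANALYZE;          -- also reclaim dead row versions from updates/deletes
VACUUM ANALYZE mytable;  -- or target a single table
```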

--
   Richard Huxton
   Archonet Ltd
