Thread: PostgreSQL not fully utilizing system resources?

From: "Gregory Stewart"

Hello List,

We've been running PostgreSQL as our web application database for
almost a year, and it has noticeably slowed down over the last few months.

Our current setup and pgsql configuration looks like this:

PostgreSQL 8.1.2 on Ubuntu 4 on a dual-core Opteron with 2 GB of RAM.
This is a dedicated DB server.

We currently have about 3.5 million rows in 91 tables. Besides the
requests coming from the web server, we have batch processes running
every 15 minutes from another internal machine that do a lot of
UPDATE, DELETE and INSERT queries on thousands of rows.

Many of the SELECT queries coming from the web server contain large
JOINS and aggregate calculations.

We are running a financial application which is very data intensive
and calculates a lot on the SQL side.

Anyways, watching the system processes we realized that PostgreSQL is
only using about 300 Mbytes for itself. Also, both cores are usually
maxed out to 100% usage.

Are we expecting too much from our server?

Our non-default configuration settings are:

max_connections = 100
shared_buffers = 17500
work_mem = 2048
maintenance_work_mem = 40000
max_fsm_pages = 35000
autovacuum = on

What can I do to make the best use of my DB server? Is our configuration
flawed? Or are we already at a point where we need to consider clustering
/ load balancing?

Any ideas and suggestions are welcome.

Regards,
Gregory Stewart

Re: PostgreSQL not fully utilizing system resources?

From: Bill Moran

"Gregory Stewart" <gstewart512@gmail.com> wrote:
>
> Hello List,
>
> We've been running PostgreSQL as our web application database for
> almost a year, and it has noticeably slowed down over the last few months.

Just going to go through your email and address each point inline.

First off, you say nothing of your vacuum/analyze schedule other than
to point out that autovacuum is on.  If you run "vacuum verbose" on the
database, what does the output say?
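
For reference, this can be run from psql (VACUUM VERBOSE is a standard
command; on 8.1 its final lines summarize free-space-map usage, which is
what to compare against max_fsm_pages):

```sql
-- Run as a superuser against the production database.  The per-table
-- output shows dead row versions; the summary at the end shows how many
-- FSM pages are needed versus what max_fsm_pages allows.
VACUUM VERBOSE;
```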

> Our current setup and pgsql configuration looks like this:
>
> 8.1.2 on Ubuntu 4 on Opteron Dual Core with 2 GBytes RAM. This is a
> dedicated DB server.

Upgrade.  8.1.2 is old; you should be running 8.1.9 unless you have a
specific reason not to.

> We currently have about 3.5 million rows in 91 tables.

How large is the dataset?  What does pg_database_size() tell you?  3.5M
rows could be a lot or a little, depending on the size of each row.
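
A quick way to check (pg_size_pretty() is available in 8.1; 'yourdb'
below is a placeholder for the actual database name):

```sql
-- Returns the on-disk size of the database in human-readable form.
SELECT pg_size_pretty(pg_database_size('yourdb'));
```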

> Besides the
> requests coming from the web server, we have batch processes running
> every 15 minutes from another internal machine that do a lot of
> UPDATE, DELETE and INSERT queries on thousands of rows.

Hence my concern that your autovacuum settings may not be aggressive
enough.

> Many of the SELECT queries coming from the web server contain large
> JOINS and aggregate calculations.
>
> We are running a financial application which is very data intensive
> and calculates a lot on the SQL side.
>
> Anyways, watching the system processes we realized that PostgreSQL is
> only using about 300 Mbytes for itself.

That's because you told it to.  Below, you allocated about 143 MB of RAM
to shared buffers (17500 pages of 8 kB each).  Current thinking is to
allocate 1/3 of your RAM to shared buffers and fine-tune from there.  If
you haven't already determined that less is better for your workload, I'd
consider bumping shared_buffers up to ~70000.
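
To put numbers on that, as a sketch (shared_buffers is counted in 8 kB
pages; note that on 8.1 changing it requires a server restart and may
require raising the kernel's SHMMAX):

```ini
# postgresql.conf sketch
# current:  17500 pages * 8 kB ~= 143 MB
# proposed: 70000 pages * 8 kB ~= 573 MB, a bit under 1/3 of 2 GB
shared_buffers = 70000
```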

> Also, both cores are usually
> maxed out to 100% usage.

Maxed out on CPU usage?  What does your I/O look like?

> Are we expecting too much from our server?

Hard to say without more details.

> Our non-default configuration settings are:
>
> max_connections = 100
> shared_buffers = 17500
> work_mem = 2048

While I can't be sure without more details, you may benefit from raising
work_mem.  If you've got 2 GB of RAM and allocate 600 MB to
shared_buffers, that leaves about 1.4 GB to divide among work_mem
allocations (work_mem applies per sort or hash operation, per backend,
so keep your connection count in mind).  Whether the large joins you
describe will benefit depends on their plans.
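
One low-risk way to experiment is per session, before touching
postgresql.conf (the value below is an arbitrary example; an integer
work_mem is in kB):

```sql
-- Raise work_mem for this session only, then re-run a problem query to
-- see whether its sorts and hashes stop spilling to disk.
SET work_mem = 16384;           -- 16 MB, this backend only (example value)
EXPLAIN ANALYZE SELECT ...;     -- substitute one of the slow joins
```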

Your description gives the impression that most of the RAM on
this system is completely free.  If that's the case, you may be
constraining PG without need, but there's not enough information in
your post to be sure.

> maintenance_work_mem = 40000
> max_fsm_pages = 35000
> autovacuum = on
>
> What can I do to make the best use of my DB server? Is our configuration
> flawed? Or are we already at a point where we need to consider clustering
> / load balancing?

It's a tough call.  EXPLAIN output for some problematic queries would be
helpful.  It is entirely possible that you're doing some intensive math
and you're simply going to need more CPU horsepower to get it done any
faster, but there's just not enough information in your post to know for
sure.

Post EXPLAIN output for some problem queries.  Let us know more about
your I/O load.  Give us some snapshots of top under load.  Find out how
large the database is.  Provide the output of VACUUM VERBOSE.

--
Bill Moran
Collaborative Fusion Inc.

wmoran@collaborativefusion.com
Phone: 412-422-3463x4023


Re: PostgreSQL not fully utilizing system resources?

From: Greg Smith

On Fri, 1 Jun 2007, Gregory Stewart wrote:

> Is our configuration flawed?

For sure.  The bad news is that you have a good chunk of work to do; the
good news is that you should quickly see a dramatic improvement as that
progresses.

> Anyways, watching the system processes we realized that PostgreSQL is
> only using about 300 Mbytes for itself. Also, both cores are usually
> maxed out to 100% usage.  Are we expecting too much from our server?

Your server isn't even running close to its capacity right now.  Start by
following the instructions at
http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm to tune
your system so it actually is using much more of your memory.  When you
run a manual VACUUM ANALYZE as it recommends, you'll probably discover you
have to increase max_fsm_pages.  The follow-up references at the bottom of
that page will lead you to several tuning guides that will go into more
detail about other things you might do.

The first obvious thing is that your extremely low work_mem setting is
likely throttling all your joins; read
http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html
to understand how that setting works, then test some of your queries
after increasing it and see how things improve.  (Be careful making
comparisons here: if you run exactly the same query twice, the second
run will usually be faster because the data is cached.)

Next, if checkpoint_segments is still at its default, that would be a
killer with your workload as well.
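
For context, the 8.1 default is checkpoint_segments = 3, and each WAL
segment is 16 MB, so heavy batch writes force very frequent checkpoints.
A sketch of a more write-friendly starting point:

```ini
# postgresql.conf sketch -- assumes a write-heavy batch workload
checkpoint_segments = 10        # default is 3; allows more WAL between checkpoints
checkpoint_timeout = 300        # seconds; this is the default, shown for context
```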

That should get you started.  If you still aren't happy with performance
after all that, post again with some details about your disk configuration
and an EXPLAIN plan for something that's moving slowly.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD