Thread: tuning question

tuning question

From: "Mag Gam"
Hello All,

Running 8.3.4. My situation is a little unique. I am running on a
single core with 2GB of memory on Red Hat Linux 5.2. My entire
installation of pgsql is about 8GB (compressed) from pg_dump. I have 6
databases. The data keeps growing, since I plan to add more fields to
my database, and it will increase dramatically.

My goal is to not use a lot of memory! My storage is fairly fast; I
can do about 250 Mb/sec (sustained). I would like to leverage my I/O
instead of memory, even though I will suffer some performance
problems. Also, is it possible to make the database data/log files
(the binary files) larger? I am thinking of making them 1GB each
instead of very small files; my FS performs better with larger
files...

Any ideas?

TIA

Re: tuning question

From: Brad Nicholson
On Thu, 2008-12-11 at 20:09 -0500, Mag Gam wrote:
> Hello All,
>
> Running 8.3.4. My situation is a little unique. I am running on a
> single core with 2GB of memory on Red Hat Linux 5.2. My entire
> installation of pgsql is about 8GB (compressed) from pg_dump. I have
> 6 databases. The data keeps growing, since I plan to add more fields
> to my database, and it will increase dramatically.

The size of the compressed pg_dump is irrelevant to the size of the
database in normal operation.  Not just because of the compression,
but also because indexes are not dumped (only the CREATE INDEX
statements), which can amount to many gigs of data that you are not
accounting for.  It also does not account for dead tuples.

Either look at the size of your database on the filesystem itself, or
run this query to get a look at the database sizes:

SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;

> My goal is to not use a lot of memory! My storage is fairly fast; I
> can do about 250 Mb/sec (sustained). I would like to leverage my I/O
> instead of memory, even though I will suffer some performance
> problems.

Before giving you suggestions, may I ask why?  Memory is cheap these
days, and intentionally limiting it seems like a bad idea - especially
if you are expecting good performance.  This may well behave in ways
you don't expect, like saturating your I/O system.

That said, if you really, really want to do this: set your
shared_buffers to a low value, bump up random_page_cost, and set
work_mem and maintenance_work_mem to lower values.
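
For illustration only, those changes might look something like the
following in postgresql.conf. The specific numbers below are guesses,
not recommendations, and you would want to test them against your own
workload:

  shared_buffers = 32MB          # small shared cache; needs a restart to change
  work_mem = 1MB                 # per-sort/hash memory; large sorts spill to disk
  maintenance_work_mem = 16MB    # used by VACUUM, CREATE INDEX, etc.
  random_page_cost = 6.0         # raised from the 4.0 default, since with little
                                 # caching more reads will actually hit the disk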

--
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.


Re: tuning question

From: David Gardner
Take a look at:
http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
and
http://www.postgresql.org/docs/8.3/interactive/kernel-resources.html#SYSVIPC

I am not sure if Red Hat provides different default values than what
PostgreSQL normally ships with. I would also run some typical test
queries after changing any of these values, especially work_mem, to
make sure you're not impacting performance too heavily.
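
For example (the table and query below are made-up placeholders,
purely for illustration), you can try different work_mem values for a
single session and compare the EXPLAIN ANALYZE output before putting
anything in postgresql.conf:

  -- hypothetical test query; substitute one of your own
  SET work_mem = '1MB';
  EXPLAIN ANALYZE
    SELECT customer_id, count(*) FROM orders GROUP BY customer_id;

  SET work_mem = '16MB';
  EXPLAIN ANALYZE
    SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
  -- watch whether the sort/hash steps stay in memory or spill to disk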

I'm not sure what else is running on the system, or what type of usage
you are expecting, but 2GB of RAM might turn out to be enough for
PostgreSQL.

Also take a look at this:
http://www.revsys.com/writings/postgresql-performance.html

I found it useful, although I believe some of the suggestions the
author makes assume that you have a large PostgreSQL installation.
He has an interesting note about random_page_cost, which might be one
of the numbers you want to change.


Mag Gam wrote:
> Hello All,
>
> Running 8.3.4. My situation is a little unique. I am running on a
> single core with 2GB of memory on Red Hat Linux 5.2. My entire
> installation of pgsql is about 8GB (compressed) from pg_dump. I have
> 6 databases. The data keeps growing, since I plan to add more fields
> to my database, and it will increase dramatically.
>
> My goal is to not use a lot of memory! My storage is fairly fast; I
> can do about 250 Mb/sec (sustained). I would like to leverage my I/O
> instead of memory, even though I will suffer some performance
> problems. Also, is it possible to make the database data/log files
> (the binary files) larger? I am thinking of making them 1GB each
> instead of very small files; my FS performs better with larger
> files...
>
> Any ideas?
>
> TIA
>
>


--
David Gardner


Thread: strange looking error

strange looking error

From: "Wright, George"
We have several cloned boxes running PostgreSQL 8.1.5. Only one got
the following error:

WARNING:  could not write block 1 of 1663/28004/49027
DETAIL:  Multiple failures --- write error may be permanent.
ERROR:  could not open relation 1663/28004/49027: No such file or
directory
CONTEXT:  writing block 1 of relation 1663/28004/49027

We have no numerically named tables.

Any ideas?
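
For what it's worth, the numbers in 1663/28004/49027 are not table
names: they are the tablespace OID, the database OID, and the
relation's filenode, which together make up the file path under the
data directory. As a rough sketch, queries like these should show
which database and relation the error is talking about:

  -- which database has OID 28004?
  SELECT oid, datname FROM pg_database WHERE oid = 28004;

  -- then, while connected to that database: which relation uses
  -- filenode 49027?
  SELECT oid, relname, relkind FROM pg_class WHERE relfilenode = 49027;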