Re: database tuning - Mailing list pgsql-performance

From: Richard Huxton
Subject: Re: database tuning
Msg-id: 475D0756.6050705@archonet.com
In response to: Re: database tuning ("kelvan" <kicmcewen@windowslive.com>)
List: pgsql-performance
kelvan wrote:
> Ok, here's the thing: I don't have a choice, I just have to work with what's given

Ah well, it happens to all of us.

> whether it is good or not. Why I need these overheads is for block
> calculations and tablespace calculations. I have to keep everything in a
> very, very small area on the HDD for head reading speed, as the server I am
> forced to use is a piece of crap, so I need to do my calculations to resolve
> this.

Out of curiosity, how are you planning to keep the relevant parts of
PostgreSQL's files at a particular physical location on the disk? I
wasn't aware of any facilities in Mac OS X for this.

> It is not that I don't know how to do my job. I understand effective
> indexing, materialized views, and all the other aspects of database tuning;
> it was a major aspect of my study. I just need to know the numbers to do
> what I have to do.

Fair enough. See the source-code for full details - start with those
directories I mentioned before.
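
Some of the numbers are also visible from SQL without digging through the
source, if that helps. A quick sketch - 'my_table' is just a placeholder
name, and the page size shown is for a stock build:

  SHOW block_size;                       -- page size, 8192 bytes on a stock build
  SELECT relpages, reltuples
    FROM pg_class
   WHERE relname = 'my_table';           -- pages used and estimated row count
  SELECT pg_relation_size('my_table');   -- on-disk size of the table's main file, in bytes

Note that pg_relation_size() does not include indexes or TOAST data;
pg_total_relation_size() does.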

> I am new to Postgres. I have used many other database management systems and
> I know the overheads for all of them, just not this one, so if someone could
> please be of assistance.
>
> Let me give a brief outline of what I have without breaking my
> confidentiality agreement:
>
> Mac server, Mac OS 10.x
> Postgres 8.2.5 (apologies, I just got updated documentation with errors
> fixed in it)
> 70 GB HDD
> 5 GB RAM
> 4 CPUs (not that it matters, as Postgres is not multi-threading)

Hmm - not enough RAM or disks, too many CPUs, but you knew that anyway.
Oh, and PG *will* use all 4 CPUs, just one per backend - not all 4 for a
single query. Not a problem in your case.

> and I have to support anywhere from approximately 5,000 to 30,000 users,
> all using it concurrently.

Hmm, 30,000 concurrent users with 5GB RAM is about 175kB per user. That's
not going to work. You'll want more than that for each connection even if
it's basically idle.

Even if you don't run out of RAM, I can't see how a single disk could
keep up with even a moderate rate of updates from that many users.
Presumably largely read-only?

Maybe you mean 30,000 web-users behind a connection-pool?
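
If so, a pooler such as pgpool or pgbouncer in front of PG can map thousands
of client connections onto a few dozen real backends. A minimal
pgbouncer-style fragment, purely to illustrate the idea (auth settings are
omitted, and the database name, port and pool sizes are made-up placeholders,
not recommendations):

  [databases]
  appdb = host=127.0.0.1 port=5432 dbname=appdb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  pool_mode = transaction
  max_client_conn = 30000
  default_pool_size = 20

The application then connects to port 6432 instead of 5432, and PostgreSQL
itself only ever sees a handful of backends.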

How many users have you reached in your testing?

> As you can see, this server wouldn't be my first choice (or my last
> choice), but as I said, I have no choice at this time.
> The interface programmer and I have come up with ways to solve certain
> performance problems that this server produces, but I still need to tune
> the database.

It isn't clear to me how you intend to tune the database with index
page-layout details, particularly since you say you are new to
PostgreSQL.

For example, with your above requirements, I'd be particularly concerned
about four things:
  1. shared_buffers
  2. work_mem
  3. Trading off 1+2 vs the risk of swap
  4. WAL activity / checkpointing impacting on my single disk

It would be interesting to see what conclusions you reached on these,
given that you're pushing the hardware to its limits. Can you share the
results of your testing?
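
For what it's worth, here is a rough sketch of where I might start on that
box, just to make the trade-offs concrete. Every value is an assumption for
a 5GB machine sitting behind a connection pool, not a tested recommendation:

  # postgresql.conf (8.2) - illustrative starting point only
  max_connections = 200          # rely on pooling rather than 30,000 backends
  shared_buffers = 1GB           # leave most of the RAM to the OS cache
  work_mem = 4MB                 # per sort/hash, per backend, so keep it small
  effective_cache_size = 3GB     # roughly what the OS should cache for us
  checkpoint_segments = 16       # spread checkpoint I/O on the single disk
  checkpoint_timeout = 10min

The point being that shared_buffers plus work_mem across all backends must
stay well clear of swap, and checkpoints are the main thing that will hurt
on a single disk.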

--
   Richard Huxton
   Archonet Ltd
