In a message of Wed, 05-11-2003, 14:48, Jeff writes:
> On 05 Nov 2003 14:33:33 +0100
> Marek Florianczyk <franki@tpi.pl> wrote:
>
> >
> > During this test I was changing some parameters in postgres, and send
> > kill -HUP ( pg_ctl reload ). I still don't know what settings will be
> > best for me, except "shared buffers", and some kernel and shell
> > settings.
> >
>
> as far as I know, -HUP won't make things like shared buffer changes
> take. you need a full restart of PG.
> ..
> but your numbers are different... I guess it did take. huh.
Well, I'm not sure, but I only did pg_ctl reload.
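(As a sanity check, one can ask the running server what it is actually
using; this is just a sketch, assuming a stock psql/pg_ctl setup, and
the data directory path below is a made-up example:

    -- show the value the running server actually uses
    SHOW shared_buffers;

    # shared_buffers normally needs a full restart, not just a reload
    pg_ctl -D /usr/local/pgsql/data restart
)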
> ..
>
> how much disk IO is going on during these tests? (vmstat 1)
> Any swapping (also shown in vmstat)
I was watching iostat 1, and it showed about 600 tps, so it's not much,
and once we put RAID 1+0 on the production machine the disks should be
fine.
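(For the record, this is the minimal way to watch for swapping while
the test runs; on Linux vmstat the si/so columns should stay at zero:

    # one sample per second; si = swapped in from disk,
    # so = swapped out to disk, both per second
    vmstat 1
)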
>
> Were any of these tables analyze'd?
> I see you used no indexes, so on each of your tables it must do a seq
> scan. Try adding an index to your test tables and rerun..
No, they weren't analyzed, and I deliberately created no indexes.
I'm testing PostgreSQL as the SQL engine for a hosting environment.
These databases will be used by users = lamers, so many of them will
never create an index. I wanted the test to be as close to reality as
possible, to see how many databases I can take on a single machine.
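(What Jeff suggests would look roughly like this per test table; the
table and column names here are made up for illustration:

    -- hypothetical test table and column; adjust to the real schema
    CREATE INDEX test_tab_id_idx ON test_tab (id);
    ANALYZE test_tab;
)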
One database with 3,000 schemas works better than 3,000 databases, but
there is a REAL, BIG problem, and I won't be able to use this solution:
every query like "\d table" or "\di" takes a veeeeeeery long time.
Users have to have phpPgAdmin, which I modified to suit our needs, but
now it doesn't work, not even for log-in. If I rewrite phpPgAdmin to
log users in without checking all schemas and all tables within them,
none of the users will be able to examine the structure of a table. A
query like "\d table" from the psql monitor takes about 2-5 MINUTES :(
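(For anyone curious why \d crawls here: psql can echo the catalog
queries a backslash command issues, so you can see the scans it runs
over pg_class/pg_namespace across all 3,000 schemas; the database and
table names below are placeholders:

    # -E (--echo-hidden) prints the queries behind backslash commands
    psql -E somedb
    somedb=> \d some_table
)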
I see that the only option is to create one database for every user,
and to monitor traffic and machine load to see when we need another PC
and another PostgreSQL...
Marek