Thread: Low Performance for big hospital server ..

Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
I have been trying to tune my server for a couple of weeks with some success, but it is
still slow when the server is under stress in the morning from many connections. I use
postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB of RAM.
For the 1 1/2 years since I started to use the database server, after optimizing
postgresql.conf everything went fine, until a couple of weeks ago, when my database
grew to 3.5 GB and there were more than 160 concurrent connections.
The server seems slower during the rush-hour period than before, and there
is some swapping as well. My top and meminfo output are shown below:

207 processes: 203 sleeping, 4 running, 0 zombie, 0 stopped
CPU0 states:  15.0% user  12.1% system    0.0% nice   0.0% iowait  72.2% idle
CPU1 states:  11.0% user  11.1% system    0.0% nice   0.0% iowait  77.2% idle
CPU2 states:  22.3% user  27.3% system    0.0% nice   0.0% iowait  49.3% idle
CPU3 states:  15.4% user  13.0% system    0.0% nice   0.0% iowait  70.4% idle
Mem:  4124720k av, 4085724k used,   38996k free,       0k shrd,   59012k buff
                   3141420k actv,   48684k in_d,   76596k in_c
Swap: 20370412k av,   46556k used, 20323856k free                 3493136k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
16708 postgres  15   0  264M 264M  261M S    14.7  6.5   0:18   2 postmaster
16685 postgres  15   0  264M 264M  261M S    14.5  6.5   1:22   0 postmaster
16690 postgres  15   0  264M 264M  261M S    13.7  6.5   1:35   3 postmaster
16692 postgres  15   0  264M 264M  261M S    13.3  6.5   0:49   1 postmaster
16323 postgres  16   0  264M 264M  261M R    11.1  6.5   1:48   2 postmaster
16555 postgres  15   0  264M 264M  261M S     9.7  6.5   1:52   3 postmaster
16669 postgres  15   0  264M 264M  261M S     8.7  6.5   1:58   3 postmaster
16735 postgres  15   0  264M 264M  261M S     7.7  6.5   0:15   0 postmaster
16774 postgres  16   0  256M 256M  254M R     7.5  6.3   0:09   0 postmaster
16247 postgres  15   0  263M 263M  261M S     7.1  6.5   0:46   0 postmaster
16696 postgres  15   0  263M 263M  261M S     6.7  6.5   0:24   1 postmaster
16682 postgres  15   0  264M 264M  261M S     4.3  6.5   1:19   3 postmaster
16726 postgres  15   0  263M 263M  261M S     1.5  6.5   0:21   3 postmaster
   14 root      15   0     0    0     0 RW    1.3  0.0 126:42   1 kscand/HighMem
16766 postgres  15   0  134M 134M  132M S     1.1  3.3   0:01   2 postmaster
16772 postgres  15   0  258M 258M  256M S     1.1  6.4   0:04   1 postmaster
16835 root      15   0  1252 1252   856 R     0.9  0.0   0:00   3 top
 2624 root      24   0 13920 7396  1572 S     0.5  0.1   6:25   1 java
16771 postgres  15   0  263M 263M  261M S     0.5  6.5   0:06   0 postmaster
   26 root      15   0     0    0     0 SW    0.3  0.0   3:24   1 kjournald
 2114 root      15   0   276  268   216 S     0.1  0.0   2:48   2 irqbalance
    1 root      15   0   108   76    56 S     0.0  0.0   0:07   3 init
    2 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
    3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
    4 root      RT   0     0    0     0 SW    0.0  0.0   0:00   2 migration/2
    5 root      RT   0     0    0     0 SW    0.0  0.0   0:00   3 migration/3
    6 root      15   0     0    0     0 SW    0.0  0.0   0:03   1 keventd

[root@data3 root]# cat < /proc/meminfo
        total:    used:    free:  shared: buffers:  cached:
Mem:  4223713280 4203782144 19931136        0 37982208 3684573184
Swap: 20859301888 65757184 20793544704
MemTotal:      4124720 kB
MemFree:         19464 kB
MemShared:           0 kB
Buffers:         37092 kB
Cached:        3570800 kB
SwapCached:      27416 kB
Active:        3215984 kB
ActiveAnon:     245576 kB
ActiveCache:   2970408 kB
Inact_dirty:    330796 kB
Inact_laundry:  164256 kB
Inact_clean:    160968 kB
Inact_target:   774400 kB
HighTotal:     3276736 kB
HighFree:         1024 kB
LowTotal:       847984 kB
LowFree:         18440 kB
SwapTotal:    20370412 kB
SwapFree:     20306196 kB

[root@data3 root]# cat < /proc/sys/kernel/shmmax
4000000000
[root@data3 root]# cat < /proc/sys/kernel/shmall
134217728

max_connections = 165
shared_buffers = 32768
sort_mem = 20480
vacuum_mem = 16384
effective_cache_size = 256900

I am still in doubt whether these figures are optimal and whether putting in more RAM
will help the system throughput.

Any ideas, please. My organization is one of the big hospitals in Thailand.
Thanks
Amrit
Thailand


Re: Low Performance for big hospital server ..

From
Mark Kirkwood
Date:
amrit@health2.moph.go.th wrote:

>I have been trying to tune my server for a couple of weeks with some success, but it is
>still slow when the server is under stress in the morning from many connections. I use
>postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB of RAM.
>For the 1 1/2 years since I started to use the database server, after optimizing
>postgresql.conf everything went fine, until a couple of weeks ago, when my database
>grew to 3.5 GB and there were more than 160 concurrent connections.
>The server seems slower during the rush-hour period than before, and there
>is some swapping as well. My top and meminfo output are shown below:
>
>
You might just be running low on RAM - your sort_mem setting means that
160 connections need about 3.1G. Add to that the 256M for your
shared_buffers and there may not be much left for the OS to use
effectively (this could explain the fact that some swap is being used).
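
The arithmetic, roughly (sort_mem is in kB and can be claimed once per sort, per
backend; shared_buffers is in 8 kB pages):

  sort_mem:        160 backends x 20480 kB = 3,276,800 kB ~= 3.1 GB (worst case)
  shared_buffers:  32768 pages  x 8 kB     =   262,144 kB ~=  256 MB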

Is reducing sort_mem an option?

regards

Mark


Re: Low Performance for big hospital server ..

From
Michael Adler
Date:
On Sun, Jan 02, 2005 at 09:54:32AM +0700, amrit@health2.moph.go.th wrote:
> postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB of RAM.

You may want to try disabling hyperthreading, if you don't mind
rebooting.

> grew to 3.5 GB and there were more than 160 concurrent connections.

Looks like your growing dataset won't fit in your OS disk cache any
longer. Isolate your most problematic queries and check out their
query plans. I bet you have some sequential scans that used to read
from cache but now need to read the disk. An index may help you.
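
For example (table, column, and index names here are hypothetical):

  EXPLAIN ANALYZE
  SELECT * FROM opd_visit WHERE visit_date = '2005-01-03';
  -- if the plan shows a Seq Scan where only a few rows match:
  CREATE INDEX opd_visit_date_idx ON opd_visit (visit_date);
  ANALYZE opd_visit;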

More RAM wouldn't hurt. =)

 -Mike Adler

Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> > postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB
> > of RAM.
>
> You may want to try disabling hyperthreading, if you don't mind
> rebooting.

Can you give me an idea why I should use the SMP kernel instead of the bigmem kernel
[i.e., turn off hyperthreading]? Will it be better to turn it off?

> > grew to 3.5 GB and there were more than 160 concurrent connections.
>
> Looks like your growing dataset won't fit in your OS disk cache any
> longer. Isolate your most problematic queries and check out their
> query plans. I bet you have some sequential scans that used to read
> from cache but now need to read the disk. An index may help you.
>
> More RAM wouldn't hurt. =)

I think there may be some query load from our program, and I am trying to locate
it.
But if I reduce the config to:
max_connections = 160
shared_buffers =  2048      [Total = 2.5 Gb.]
sort_mem  = 8192   [Total = 1280 Mb.]
vacuum_mem = 16384
effective_cache_size  = 128897 [= 1007 Mb. = 1 Gb.  ]
Will it be more suitable for my server than before?

Thanks for all comments.
Amrit
Thailand


Re: Low Performance for big hospital server ..

From
Dave Cramer
Date:
The common wisdom of shared buffers is around 6-10% of available memory.
Your proposal below is about 50% of memory.

I'm not sure what the original numbers actually meant; they are quite large.

Also, effective_cache_size is the sum of kernel buffers + shared_buffers, so it
should be bigger than shared_buffers.
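
As an illustrative sketch on a 4 GB machine (both settings are in 8 kB pages;
the exact values depend on the workload, so treat these as placeholders):

  shared_buffers = 32768          # ~256 MB, within the 6-10% rule of thumb
  effective_cache_size = 400000   # ~3.1 GB: shared_buffers + expected kernel cache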

Also turning hyperthreading off may help, it is unlikely it is doing any
good unless you are running a relatively new (2.6.x) kernel.

I presume you are vacuuming on a regular basis?

amrit@health2.moph.go.th wrote:

>>>postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB
>>>of RAM.
>>
>>You may want to try disabling hyperthreading, if you don't mind
>>rebooting.
>>
>>
>
>Can you give me an idea why I should use the SMP kernel instead of the bigmem kernel
>[i.e., turn off hyperthreading]? Will it be better to turn it off?
>
>
>
>>>grew to 3.5 GB and there were more than 160 concurrent connections.
>>>
>>>
>>Looks like your growing dataset won't fit in your OS disk cache any
>>longer. Isolate your most problematic queries and check out their
>>query plans. I bet you have some sequential scans that used to read
>>from cache but now need to read the disk. An index may help you.
>>
>>More RAM wouldn't hurt. =)
>>
>>
>
>I think there may be some query load from our program, and I am trying to locate
>it.
>But if I reduce the config to :
>max_connections = 160
>shared_buffers =  2048      [Total = 2.5 Gb.]
>sort_mem  = 8192   [Total = 1280 Mb.]
>vacuum_mem = 16384
>effective_cache_size  = 128897 [= 1007 Mb. = 1 Gb.  ]
>Will it be more suitable for my server than before?
>
>Thanks for all comments.
>Amrit
>Thailand

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> The common wisdom of shared buffers is around 6-10% of available memory.
> Your proposal below is about 50% of memory.
>
> I'm not sure what the original numbers actually meant, they are quite large.
>
I will try to reduce shared buffer to 1536 [1.87 Mb].

> also effective cache is the sum of kernel buffers + shared_buffers so it
> should be bigger than shared buffers.
I will also set effective_cache_size to 2097152 [2 Gb].
I will give you the result, because tomorrow [4/12/05] will be an official working
day at my hospital [which has more than 1700 OPD patients/day].


> Also turning hyperthreading off may help, it is unlikely it is doing any
> good unless you are running a relatively new (2.6.x) kernel.
Why? Could you give me the reason?

> I presume you are vacuuming on a regular basis?
Yes, vacuumdb daily.



Re: Low Performance for big hospital server ..

From
Mark Kirkwood
Date:
amrit@health2.moph.go.th wrote:

>
>max_connections = 160
>shared_buffers =  2048      [Total = 2.5 Gb.]
>sort_mem  = 8192   [Total = 1280 Mb.]
>vacuum_mem = 16384
>effective_cache_size  = 128897 [= 1007 Mb. = 1 Gb.  ]
>Will it be more suitable for my server than before?
>
>
>
>
I would keep shared_buffers in the 10000->20000 range, as this is
allocated *once* into shared memory, so only uses 80->160 Mb in *total*.

The lower sort_mem will help reduce memory pressure (as this is
allocated for every backend connection) and this will help performance -
*unless* you have lots of queries that need to sort large datasets. If
so, then these will hammer your i/o subsystem, possibly canceling any
gain from freeing up more memory. So there is a need to understand what
sort of workload you have!

best wishes

Mark


Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> >max_connections = 160
> >shared_buffers =  2048      [Total = 2.5 Gb.]
> >sort_mem  = 8192   [Total = 1280 Mb.]
> >vacuum_mem = 16384
> >effective_cache_size  = 128897 [= 1007 Mb. = 1 Gb.  ]
> >Will it be more suitable for my server than before?
> >
> >
> >
> >
> I would keep shared_buffers in the 10000->20000 range, as this is
> allocated *once* into shared memory, so only uses 80->160 Mb in *total*.

You mean that if I increase shared_buffers to around 12000 [160 connections],
this will not affect the memory usage?

> The lower sort_mem will help reduce memory pressure (as this is
> allocated for every backend connection) and this will help performance -
> *unless* you have lots of queries that need to sort large datasets. If
> so, then these will hammer your i/o subsystem, possibly canceling any
> gain from freeing up more memory. So there is a need to understand what
> sort of workload you have!

Will increasing effective_cache_size to around 200000 make a bit of an
improvement? Do you think so?

Any comments please, thanks.
Amrit
Thailand


Re: Low Performance for big hospital server ..

From
Mark Kirkwood
Date:
amrit@health2.moph.go.th wrote:

>>>max_connections = 160
>>>shared_buffers =  2048      [Total = 2.5 Gb.]
>>>sort_mem  = 8192   [Total = 1280 Mb.]
>>>vacuum_mem = 16384
>>>effective_cache_size  = 128897 [= 1007 Mb. = 1 Gb.  ]
>>>Will it be more suitable for my server than before?
>>>
>>>
>>>
>>>
>>>
>>>
>>I would keep shared_buffers in the 10000->20000 range, as this is
>>allocated *once* into shared memory, so only uses 80->160 Mb in *total*.
>>
>>
>
>You mean that if I increase shared_buffers to around 12000 [160 connections],
>this will not affect the memory usage?
>
>
>
shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is
shared, so no matter how many connections you have it will only use 96M.


>>The lower sort_mem will help reduce memory pressure (as this is
>>allocated for every backend connection) and this will help performance -
>>*unless* you have lots of queries that need to sort large datasets. If
>>so, then these will hammer your i/o subsystem, possibly canceling any
>>gain from freeing up more memory. So there is a need to understand what
>>sort of workload you have!
>>
>>
>
>Will increasing effective_cache_size to around 200000 make a bit of an
>improvement? Do you think so?
>
>
>
I would leave it at the figure you proposed (128897), and monitor your
performance.
(you can always increase it later and see what the effect is).

regards

Mark


Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is
> shared, so no matter how many connections you have it will only use 96M.

Now I use the figure of 27853

> >
> >Will the increasing in effective cache size to arround 200000 make a little
> bit
> >improvement ? Do you think so?
> >
Decreasing the sort mem too much [8192] made the performance much slower, so I use
sort_mem = 16384
and leave the effective cache at the same value. The result is quite better, but I
should wait for tomorrow morning [official hours] to see the end result.

> >
> I would leave it at the figure you proposed (128897), and monitor your
> performance.
> (you can always increase it later and see what the effect is).
Yes, I use this figure.

If the result is still poor, will putting in more RAM "6-8Gb" [also putting in more
money] solve the problem?
Thanks ,
Amrit
Thailand


Re: Low Performance for big hospital server ..

From
William Yu
Date:
amrit@health2.moph.go.th wrote:
> I will try to reduce shared buffer to 1536 [1.87 Mb].

1536 is probably too low. I've tested a bunch of different settings on my
8GB Opteron server and 10K seems to be the best setting.


>>also effective cache is the sum of kernel buffers + shared_buffers so it
>>should be bigger than shared buffers.
>
> I will also set effective_cache_size to 2097152 [2 Gb].
> I will give you the result, because tomorrow [4/12/05] will be an official working
> day at my hospital [which has more than 1700 OPD patients/day].

To figure out your effective cache size, run top and add free+cached.
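
For example, using the top output from earlier in this thread (values in kB;
effective_cache_size itself is set in 8 kB pages):

  free + cached = 38996 + 3493136 = 3,532,132 kB  ->  ~441,000 pages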


>>Also turning hyperthreading off may help, it is unlikely it is doing any
>>good unless you are running a relatively new (2.6.x) kernel.
>
> Why? Could you give me the reason?

Pre 2.6, the kernel does not know the difference between logical and
physical CPUs. Hence, in a dual processor system with hyperthreading, it
actually sees 4 CPUs. And when assigning processes to CPUs, it may
assign to 2 logical CPUs in the same physical CPU.


>
>
>>I presume you are vacuuming on a regular basis?
>
> Yes, vacuumdb daily.

Do you vacuum table by table or the entire DB? I find over time, the
system tables can get very bloated and cause a lot of slowdowns just due
to schema queries/updates. You might want to try a VACUUM FULL ANALYZE
just on the system tables.
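
A sketch of that, aimed at the catalogs that churn the most (VACUUM FULL takes
an exclusive lock, so run it off-hours):

  VACUUM FULL ANALYZE pg_class;
  VACUUM FULL ANALYZE pg_attribute;
  VACUUM FULL ANALYZE pg_index;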

Re: Low Performance for big hospital server ..

From
Mike Mascari
Date:
William Yu wrote:
> amrit@health2.moph.go.th wrote:
>> Yes, vacuumdb daily.
>
> Do you vacuum table by table or the entire DB? I find over time, the
> system tables can get very bloated and cause a lot of slowdowns just due
> to schema queries/updates. You might want to try a VACUUM FULL ANALYZE
> just on the system tables.

A REINDEX of the system tables in stand-alone mode might also be in
order, even for a 7.4.x database:

http://www.postgresql.org/docs/7.4/interactive/sql-reindex.html
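
A rough sketch of the stand-alone procedure (the data directory and database
name are placeholders; check the page above for the exact steps for your version):

  $ pg_ctl stop
  $ postgres -D /usr/local/pgsql/data -O -P mydb
  backend> REINDEX DATABASE mydb FORCE;
  (exit with ctrl-D, then restart the postmaster)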

If a dump-reload-analyze cycle yields significant performance
improvements then we know it's due to dead-tuple bloat - either heap
tuples or index tuples.

Mike Mascari


Re: Low Performance for big hospital server ..

From
"Gregory S. Williamson"
Date:
Amrit --

>-----Original Message-----
>From:    amrit@health2.moph.go.th [mailto:amrit@health2.moph.go.th]
>Sent:    Mon 1/3/2005 12:18 AM
>To:    Mark Kirkwood
>Cc:    PGsql-performance
>Subject:    Re: [PERFORM] Low Performance for big hospital server ..
>> shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is
>> shared, so no matter how many connections you have it will only use 96M.
>
>Now I use the figure of 27853
>
>> >
>> >Will increasing effective_cache_size to around 200000 make a bit of an
>> >improvement? Do you think so?
>> >
>Decreasing the sort mem too much [8192] made the performance much slower, so I use
>sort_mem = 16384
>and leave the effective cache at the same value. The result is quite better, but I
>should wait for tomorrow morning [official hours] to see the end result.
>
>> >
>> I would leave it at the figure you proposed (128897), and monitor your
>> performance.
>> (you can always increase it later and see what the effect is).
>Yes , I use this figure.
>
>If the result is still poor, will putting in more RAM "6-8Gb" [also putting in more
>money] solve the problem?

Adding RAM will almost always help, at least for a while. Our small runtime servers
have 2 gigs of RAM; the larger ones have 4 gigs; I do anticipate the need to add RAM
as we add users.

If you have evaluated the queries that are running and verified that they are using
indexes properly, etc., and tuned the other parameters for your system and its disks,
adding memory helps because it increases the chance that data is already in memory,
thus saving the time to fetch it from disk. Studying performance under load with top,
vmstat, etc. and detailed analysis of queries can often trade some human time for the
money that extra hardware would cost. Sometimes easier to do than getting downtime for
a critical server, as well.

If you don't have a reliable way of reproducing real loads on a test system, it is
best to change things cautiously, and observe the system under load; if you change too
many things (ideally only 1 at a time, but often that is not possible) you may
actually defeat a good change with a bad one; at the least, you may not know which
change was the most important one if you make several at once.

Best of luck,

Greg Williamson
DBA
GlobeXplorer LLC
>Thanks ,
>Amrit
>Thailand






Re: Low Performance for big hospital server ..

From
Pierre-Frédéric Caillaud
Date:

> Decreasing the sort mem too much [8192] made the performance much slower, so I use
> sort_mem = 16384
> and leave the effective cache at the same value. The result is quite better, but I
> should wait for tomorrow morning [official hours] to see the end result.

    You could also profile your queries to see where those big sorts come
from, and maybe add some indexes to try to replace sorts by
index-scans-in-order, which use no temporary memory. Can you give an
example of your queries which make use of big sorts like this?
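
A hypothetical example (all names made up): with an index whose order matches
the ORDER BY, the planner can walk the index instead of sorting:

  CREATE INDEX opd_visit_hn_date_idx ON opd_visit (hn, visit_date);
  EXPLAIN SELECT * FROM opd_visit WHERE hn = '12345' ORDER BY hn, visit_date;
  -- an Index Scan with no Sort node means no sort memory is needed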

Re: Low Performance for big hospital server ..

From
Dave Cramer
Date:

William Yu wrote:

> amrit@health2.moph.go.th wrote:
>
>> I will try to reduce shared buffer to 1536 [1.87 Mb].
>
>
> 1536 is probably too low. I've tested a bunch of different settings on
> my  8GB Opteron server and 10K seems to be the best setting.

Be careful here, he is not using Opterons, which can access physical
memory above 4G efficiently. Also, he only has 4G; the 6-10% rule still
applies.

>
>
>>> also effective cache is the sum of kernel buffers + shared_buffers
>>> so it
>>> should be bigger than shared buffers.
>>
>>
>> I will also set effective_cache_size to 2097152 [2 Gb].
>> I will give you the result, because tomorrow [4/12/05] will be an official
>> working day at my hospital [which has more than 1700 OPD patients/day].
>
>
> To figure out your effective cache size, run top and add free+cached.

My understanding is that effective cache is the sum of shared buffers
plus kernel buffers; I'm not sure what free + cached gives you.

>
>
>>> Also turning hyperthreading off may help, it is unlikely it is doing
>>> any
>>> good unless you are running a relatively new (2.6.x) kernel.
>>
>>
>> Why? Could you give me the reason?
>
>
> Pre 2.6, the kernel does not know the difference between logical and
> physical CPUs. Hence, in a dual processor system with hyperthreading,
> it actually sees 4 CPUs. And when assigning processes to CPUs, it may
> assign to 2 logical CPUs in the same physical CPU.

Right, the pre-2.6 kernels don't really know how to handle hyperthreaded
CPUs.

>
>
>>
>>
>>> I presume you are vacuuming on a regular basis?
>>
>>
>> Yes, vacuumdb daily.
>
>
> Do you vacuum table by table or the entire DB? I find over time, the
> system tables can get very bloated and cause a lot of slowdowns just
> due to schema queries/updates. You might want to try a VACUUM FULL
> ANALYZE just on the system tables.

You may want to try this, but a regular vacuum analyze should work fine as
long as you have the free space map settings correct. Also be aware that
pre-7.4.x the free space map is not populated on startup, so you should
do a vacuum analyze right after startup.


--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: Low Performance for big hospital server ..

From
Dave Cramer
Date:
Amrit,

I realize you may be stuck with 7.3.x but you should be aware that 7.4 is considerably faster, and 8.0 appears to be even faster yet.

I would seriously consider upgrading, if at all possible.

A few more hints.

Random page cost is quite conservative if you have reasonably fast disks.
Speaking of fast disks, not all disks are created equal, some RAID drives are quite slow (Bonnie++ is your friend here)

Sort memory can be set on a per query basis, I'd consider lowering it quite low and only increasing it when necessary.

Which brings us to how to find out when it is necessary.
Turn logging on and turn on log_pid, and log_duration, then you will need to sort through the logs to find the slow queries.

There are some special cases where postgresql can be quite slow, and minor adjustments to the query can improve it significantly.

For instance, pre-8.0, select * from foo where id = 1; where id is an int8 will never use an index even if one exists, because the bare literal is typed int4; writing id = '1' or id = 1::int8 avoids this.
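
A sketch of the per-query sort_mem approach (the table name is hypothetical):

  SET sort_mem = 65536;                         -- in kB; this session only
  SELECT * FROM opd_visit ORDER BY visit_date;  -- the one known big sort
  RESET sort_mem;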


Regards,

Dave


amrit@health2.moph.go.th wrote:

>> The common wisdom of shared buffers is around 6-10% of available memory.
>> Your proposal below is about 50% of memory.
>>
>> I'm not sure what the original numbers actually meant; they are quite large.
>
> I will try to reduce shared buffer to 1536 [1.87 Mb].
>
>> Also, effective_cache_size is the sum of kernel buffers + shared_buffers,
>> so it should be bigger than shared_buffers.
>
> I will also set effective_cache_size to 2097152 [2 Gb].
> I will give you the result, because tomorrow [4/12/05] will be an official
> working day at my hospital [which has more than 1700 OPD patients/day].
>
>> Also turning hyperthreading off may help, it is unlikely it is doing any
>> good unless you are running a relatively new (2.6.x) kernel.
>
> Why? Could you give me the reason?
>
>> I presume you are vacuuming on a regular basis?
>
> Yes, vacuumdb daily.

-- 
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561

Re: Low Performance for big hospital server ..

From
"Merlin Moncure"
Date:
amrit wrote:
> I have been trying to tune my server for a couple of weeks with some success,
> but it is still slow when the server is under stress in the morning from many
> connections. I use postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon
> 3.0 GHz CPUs and 4 GB of RAM.
> For the 1 1/2 years since I started to use the database server, after
> optimizing postgresql.conf everything went fine, until a couple of weeks ago,
> when my database grew to 3.5 GB and there were more than 160 concurrent
> connections.
> The server seems slower during the rush-hour period than before, and there
> is some swapping as well. My top and meminfo are shown here below:

well, you've hit the 'wall'...your system seems to be more or less at
the limit of what 32 bit technology can deliver.  If upgrading to Opteron
and 64 bit is out of the question, here are a couple of new tactics you
can try.  Optimizing postgresql.conf can help, but only so much.

Optimize queries:
One big, often-overlooked performance gainer is to use functional indexes to
access data from a table.  This can save space by making the index
smaller and more efficient.  This wins on cache and speed at the price
of some flexibility.
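
A sketch (table and column names are hypothetical; 7.3 supports an index on a
function of a column):

  CREATE INDEX patient_name_lower_idx ON patient (lower(name));
  -- queries written against the same expression can then use the index:
  SELECT * FROM patient WHERE lower(name) = 'somchai';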

Optimize datums:  replace numeric(4) with int2, numeric(6) with int4,
etc.  This will save a little space on the tuple which will ease up on
the cache a bit.  Use constraints where necessary to preserve data
integrity.
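
For instance (a hypothetical column, with a CHECK constraint standing in for
the old numeric(4) range):

  CREATE TABLE visit (
      visit_id int4 NOT NULL,
      dept     int2 NOT NULL CHECK (dept BETWEEN 0 AND 9999)
  );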

Materialized views:  These can provide an enormous win if you can
incorporate them into your application.  With normal views, multiple
backends can share a query plan.  With mat-views, backends can share
both the plan and its execution.

Merlin


Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> I realize you may be stuck with 7.3.x but you should be aware that 7.4
> is considerably faster, and 8.0 appears to be even faster yet.

There is a little bit of incompatibility between 7.3 and 8, so it is rather
difficult to change.

> I would seriously consider upgrading, if at all possible.
>
> A few more hints.
>
> Random page cost is quite conservative if you have reasonably fast disks.
> Speaking of fast disks, not all disks are created equal, some RAID
> drives are quite slow (Bonnie++ is your friend here)
>
> Sort memory can be set on a per query basis, I'd consider lowering it
> quite low and only increasing it when necessary.
>
> Which brings us to how to find out when it is necessary.
> Turn logging on and turn on log_pid, and log_duration, then you will
> need to sort through the logs to find the slow queries.

In standard RH 9.0, if I enable both of the log options [pid, duration], where
could I look for the log output, and would it make the system slower?


Amrit
Thailand


Re: Low Performance for big hospital server ..

From
Robert Treat
Date:
On Monday 03 January 2005 10:40, amrit@health2.moph.go.th wrote:
> > I realize you may be stuck with 7.3.x but you should be aware that 7.4
> > is considerably faster, and 8.0 appears to be even faster yet.
>
> There are a little bit incompatibility between 7.3 -8 , so rather difficult
> to change.
>

Sure, but even moving to 7.4 would be a bonus, especially if you use a lot of
select * from tab where id in (select ... ) type queries, and the
incompatibility is smaller as well.

> > I would seriously consider upgrading, if at all possible.
> >
> > A few more hints.
> >

One thing I didn't see mentioned that should have been was to watch for index
bloat, which was a real problem on 7.3 machines.  You can determine which
indexes are bloated by studying vacuum output or by comparing index size on
disk to table size on disk.

Another thing I didn't see mentioned was your free space map settings.
Make sure these are large enough to hold your data... max_fsm_relations
should be larger than the total # of tables you have in your system (check
the archives for the exact query needed) and max_fsm_pages needs to be big
enough to hold all of the pages you use in a day... this is hard to calculate
in 7.3, but if you look at your vacuum output and add the number of pages
cleaned up for all tables, this could give you a good number to work with. It
would certainly tell you if your setting is too small.
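
As a sketch (the query is an approximation, not necessarily the exact one from
the archives, and the conf values are illustrative):

  -- count the relations (tables, indexes, toast tables) the FSM must track:
  SELECT count(*) FROM pg_class WHERE relkind IN ('r', 'i', 't');

  # postgresql.conf, sized with some headroom:
  max_fsm_relations = 1000
  max_fsm_pages = 500000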

--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

Re: Low Performance for big hospital server ..

From
William Yu
Date:
Dave Cramer wrote:

>
>
> William Yu wrote:
>
>> amrit@health2.moph.go.th wrote:
>>
>>> I will try to reduce shared buffer to 1536 [1.87 Mb].
>>
>>
>>
>> 1536 is probably too low. I've tested a bunch of different settings on
>> my  8GB Opteron server and 10K seems to be the best setting.
>
>
> Be careful here, he is not using opterons which can access physical
> memory above 4G efficiently. Also he only has 4G the 6-10% rule still
> applies

10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10% rule.


>> To figure out your effective cache size, run top and add free+cached.
>
>
> My understanding is that effective cache is the sum of shared buffers
> plus kernel buffers; I'm not sure what free + cached gives you.

Not true. Effective cache size is the free memory available that the OS
can use for caching for Postgres. In a system that runs nothing but
Postgres, it's free + cached.

Re: Low Performance for big hospital server ..

From
Dave Cramer
Date:

amrit@health2.moph.go.th wrote:

>>I realize you may be stuck with 7.3.x but you should be aware that 7.4
>>is considerably faster, and 8.0 appears to be even faster yet.
>>
>>
>
>There is a little bit of incompatibility between 7.3 and 8, so it is rather
>difficult to change.
>
>
>
>>I would seriously consider upgrading, if at all possible.
>>
>>A few more hints.
>>
>>Random page cost is quite conservative if you have reasonably fast disks.
>>Speaking of fast disks, not all disks are created equal, some RAID
>>drives are quite slow (Bonnie++ is your friend here)
>>
>>Sort memory can be set on a per query basis, I'd consider lowering it
>>quite low and only increasing it when necessary.
>>
>>Which brings us to how to find out when it is necessary.
>>Turn logging on and turn on log_pid, and log_duration, then you will
>>need to sort through the logs to find the slow queries.
>>
>>
>
>In standard RH 9.0, if I enable both of the log options [pid, duration], where
>could I look for the log output, and would it make the system slower?
>
>
On a Red Hat system, logging is more or less disabled if you used the RPM.

You can set syslog = 2 in postgresql.conf and then you will get the
logs via syslog (in /var/log/messages by default).
Yes, it will make it slower, but you have to find out which queries are
slow.
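
A sketch of the relevant postgresql.conf lines (7.3-era parameter names):

  syslog = 2            # send log output to syslog only
  log_pid = true        # tag each log line with the backend pid
  log_duration = true   # log how long each query took
  log_statement = true  # log the query text itself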

Dave

>
>Amrit
>Thailand

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: Low Performance for big hospital server ..

From
Dave Cramer
Date:

William Yu wrote:

> Dave Cramer wrote:
>
>>
>>
>> William Yu wrote:
>>
>>> amrit@health2.moph.go.th wrote:
>>>
>>>> I will try to reduce shared buffer to 1536 [1.87 Mb].
>>>
>>>
>>>
>>>
>>> 1536 is probably too low. I've tested a bunch of different settings
>>> on my  8GB Opteron server and 10K seems to be the best setting.
>>
>>
>>
>> Be careful here, he is not using opterons which can access physical
>> memory above 4G efficiently. Also he only has 4G the 6-10% rule still
>> applies
>
>
> 10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10%
> rule.
>
Correct; I didn't actually do the math. I refrain from giving actual
numbers, as every system is different.

>
>>> To figure out your effective cache size, run top and add free+cached.
>>
>>
>>
>> My understanding is that effective cache is the sum of shared buffers
>> plus kernel buffers; I'm not sure what free + cached gives you.
>
>
> Not true. Effective cache size is the free memory available that the
> OS can use for caching for Postgres. In a system that runs nothing but
> Postgres, it's free + cached.

You still need to add in the shared buffers as they are part of the
"effective cache"

Dave


--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
Today is the first official day of the week and the system runs better on
several points, but there are still some points that need to be corrected. Some
queries on some tables are very slow. I think the queries inside the program
need to be rewritten.
Now I have made the sort mem a little bit bigger:
sort mem = 16384; increasing the sort mem has no effect on the slow points,
even though there are few connections.
shared_buffers = 27853
effective cache = 120000

I will put in more RAM, but someone said RH 9.0 has poor recognition of RAM
above 4 Gb?
Should I turn hyperthreading off? Would it make any difference whether
hyperthreading is on or off?
Thanks for any comments
Amrit
Thailand

Re: Low Performance for big hospital server ..

From
Gavin Sherry
Date:
On Tue, 4 Jan 2005 amrit@health2.moph.go.th wrote:

> Today is the first official day of the week and the system runs better on
> several points, but there are still some points that need to be corrected. Some
> queries on some tables are very slow. I think the queries inside the program
> need to be rewritten.
> Now I have made the sort mem a little bit bigger:
> sort mem = 16384; increasing the sort mem has no effect on the slow points,
> even though there are few connections.
> shared_buffers = 27853
> effective cache = 120000

Even though others have said otherwise, I've had good results from setting
sort_mem higher -- even if that is per query.

>
> I will put in more RAM, but someone said RH 9.0 has poor recognition of RAM
> above 4 Gb?

I think they were referring to 32 bit architectures, not distributions as
such.

> > Should I turn hyperthreading off? Would it make any difference whether
> > hyperthreading is on or off?
> > Thanks for any comments

In my experience, the largest performance increases come from intensive
analysis and optimisation of queries. Look at the output of EXPLAIN
ANALYZE for the queries your application is generating and see if they can
be tuned in anyway. More often than not, they can.

Feel free to ask for assistence on irc at irc.freenode.net #postgresql.
People there help optimise queries all day ;-).

> Amrit
> Thailand

Gavin

Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> > I will put in more RAM, but someone said RH 9.0 has poor recognition of RAM
> > above 4 Gb?
>
> I think they were refering to 32 bit architectures, not distributions as
> such.

Sorry, wrong reason. Then should I increase the RAM beyond 4 Gb on a 32 bit
architecture?

> > Should I turn hyperthreading off? Would it make any difference whether
> > hyperthreading is on or off?
> > Thanks for any comments
>
> In my experience, the largest performance increases come from intensive
> analysis and optimisation of queries. Look at the output of EXPLAIN
> ANALYZE for the queries your application is generating and see if they can
> be tuned in anyway. More often than not, they can.

So what you mean is that the result is the same whether hyperthreading is on or
off?
Will there be any harm if I leave it on?
The main point should be adjusting the queries, right?

> Feel free to ask for assistence on irc at irc.freenode.net #postgresql.
> People there help optimise queries all day ;-).

How could I contact those people ;=> which URL?
Thanks again.
Amrit
Thailand

Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> > Today is the first official day of the week and the system runs
> > better on several points, but there are still some points that need to
> > be corrected. Some queries on some tables are very slow. I think the
> > queries inside the program need to be rewritten.
> > Now I have made the sort mem a little bit bigger:
> > sort mem = 16384; increasing the sort mem has no effect on the
> > slow points, even though there are few connections.
> > shared_buffers = 27853
> > effective cache = 120000

>   If I were you I would upgrade from RH 9 to Fedora Core 2 or 3 after
>   some initial testing.  You'll see a huge improvement of speed on the
>   system as a whole.  I would try turning hyperthreading off also.


Now I have turned hyperthreading off and readjusted the conf. I found the bulk
query, which was:
an update of one flag of a table [8 million records, which I think is not too
much]. When I turned this query off, everything went fine.
I don't know whether updating data is much slower than inserting [Postgresql
7.3.2], and how could we improve the update method?
Thanks for all the help.
Amrit
Thailand

NB. I would like to give my appreciation to all of the volunteers from many
countries who are combating the big disaster [tsunamis] in my country [Thailand].

Re: Low Performance for big hospital server ..

From
Dave Cramer
Date:
Amrit,

can you post

explain <your slow update query>

so we can see what it does ?

Dave

amrit@health2.moph.go.th wrote:

>>> Today is the first official day of the week and the system runs
>>> better on several points, but there are still some points that need to
>>> be corrected. Some queries on some tables are very slow. I think the
>>> queries inside the program need to be rewritten.
>>> Now I have made the sort mem a little bit bigger:
>>> sort mem = 16384; increasing the sort mem has no effect on the
>>> slow points, even though there are few connections.
>>> shared_buffers = 27853
>>> effective cache = 120000
>>
>> If I were you I would upgrade from RH 9 to Fedora Core 2 or 3 after some
>> initial testing.  You'll see a huge improvement of speed on the system
>> as a whole.  I would try turning hyperthreading off also.
>
> Now I have turned hyperthreading off and readjusted the conf. I found the
> bulk query, which was:
> an update of one flag of a table [8 million records, which I think is not
> too much]. When I turned this query off, everything went fine.
> I don't know whether updating data is much slower than inserting
> [Postgresql 7.3.2], and how could we improve the update method?
> Thanks for all the help.
> Amrit
> Thailand
>
> NB. I would like to give my appreciation to all of the volunteers from
> many countries who are combating the big disaster [tsunamis] in my
> country [Thailand].

-- 
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561

Re: Low Performance for big hospital server ..

From
William Yu
Date:
amrit@health2.moph.go.th wrote:
> Now I have turned hyperthreading off and readjusted the conf. I found the bulk
> query, which was:
> an update of one flag of a table [8 million records, which I think is not too
> much]. When I turned this query off, everything went fine.
> I don't know whether updating data is much slower than inserting [Postgresql
> 7.3.2], and how could we improve the update method?

UPDATE is expensive. Under an MVCC setup, it's roughly the equivalent of
DELETE + INSERT of a new record (i.e., the old record is deprecated and a
new version of the record is written). Updating 8 million records would be
very I/O intensive and probably flushes your OS cache, so all other queries
hit disk versus superfast memory. And if this operation is run multiple
times during the day, you may end up with a lot of dead tuples in the table,
which makes querying it deadly slow.

If it's a dead tuples issue, you probably have to increase your
free space map and vacuum analyze that specific table more often. If it's
an I/O hit issue, a lazy updating procedure would help if the operation
is not time critical (e.g., load the record keys that need updating and
loop through the records with a time delay).

Re: Low Performance for big hospital server ..

From
Dawid Kuroczko
Date:
On Wed,  5 Jan 2005 22:35:42 +0700, amrit@health2.moph.go.th
<amrit@health2.moph.go.th> wrote:
> Now I have turned hyperthreading off and readjusted the conf. I found the bulk
> query, which was:
> an update of one flag of a table [8 million records, which I think is not too much]

Ahh, the huge update.  Below are my "hints" I've
found while trying to optimize such updates.

First of all, does this update really change this 'flag'?
Say, you have the update:
UPDATE foo SET flag = 4 WHERE [blah];
are you sure that flag is always different from 4?
If not, then add:
UPDATE foo SET flag = 4 WHERE flag <> 4 AND [blah];
This makes sure only tuples which actually need the change will
receive it.  [ IIRC MySQL does this, while PgSQL will always perform the
UPDATE, regardless of whether it changes anything or not ];

Divide the update, if possible.  This way the query uses
less memory and you may call VACUUM in between
updates.  To do this, first SELECT INTO a TEMPORARY
table the list of rows to update (their ids or something),
and then loop through it to update the values.

I guess the problem with huge updates is that
until the update is finished, the new tuples are
not visible, so the old cannot be freed...
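
A minimal sketch of the divide-and-vacuum idea, chunking on an assumed integer
key instead of a temporary table (names, ranges and chunk size are illustrative):

  UPDATE foo SET flag = 4 WHERE flag <> 4 AND id BETWEEN 1 AND 1000000;
  VACUUM foo;
  UPDATE foo SET flag = 4 WHERE flag <> 4 AND id BETWEEN 1000001 AND 2000000;
  VACUUM foo;
  -- ...and so on, driven from a script, until the key range is covered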

   Regards,
      Dawid

Re: Low Performance for big hospital server ..

From
amrit@health2.moph.go.th
Date:
> Ahh, the huge update.  Below are my "hints" I've
> found while trying to optimize such updates.
>
> First of all, does this update really change this 'flag'?
> Say, you have the update:
> UPDATE foo SET flag = 4 WHERE [blah];
> are you sure that flag is always different from 4?
> If not, then add:
> UPDATE foo SET flag = 4 WHERE flag <> 4 AND [blah];
> This makes sure only tuples which actually need the change will
> receive it.  [ IIRC MySQL does this, while PgSQL will always perform the
> UPDATE, regardless of whether it changes anything or not ];
>
> Divide the update, if possible.  This way the query uses
> less memory and you may call VACUUM in between
> updates.  To do this, first SELECT INTO a TEMPORARY
> table the list of rows to update (their ids or something),
> and then loop through it to update the values.
>
> I guess the problem with huge updates is that
> until the update is finished, the new tuples are
> not visible, so the old cannot be freed...

Yes, a very good point. I must try this, and I will give you the result. Thanks a
lot.
Amrit
Thailand


Re: Low Performance for big hospital server ..

From
Josh Berkus
Date:
Dawid,

> Ahh, the huge update.  Below are my "hints" I've
> found while trying to optimize such updates.
> Divide the update, if possible.  This way the query uses
> less memory and you may call VACUUM in between
> updates.  To do this, first SELECT INTO a TEMPORARY
> table the list of rows to update (their ids or something),
> and then loop through it to update the values.

There are other ways to deal with this as well -- one is by normalizing the database.
Often, I find that massive updates like this are caused by a denormalized
database.

For example, Lyris stores its "mailing numbers" only as repeated numbers in
the recipients table.   When a mailing is complete, Lyris updates all of the
recipients .... up to 750,000 rows in the case of my client ... to indicate
the completion of the mailing (it's actually a little more complicated than
that, but the example captures the essential problem).

It would be far better for Lyris to use a separate mailings table, with a
status in that table ... which would then require only *one* row update to
indicate completion, instead of 750,000.
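
A sketch of that normalized layout (all names are hypothetical):

  CREATE TABLE mailings (
      mailing_id int4 PRIMARY KEY,
      status     text NOT NULL DEFAULT 'in progress'
  );
  -- recipients reference a mailing instead of carrying their own status,
  -- so completion becomes a single-row update rather than 750,000:
  UPDATE mailings SET status = 'complete' WHERE mailing_id = 42;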

I can't tell you how many times I've seen this sort of thing.   And the
developers always tell me "Well, we denormalized for performance reasons ...
"

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: Low Performance for big hospital server ..

From
Frank Wiles
Date:
On Thu, 6 Jan 2005 09:06:55 -0800
Josh Berkus <josh@agliodbs.com> wrote:

> I can't tell you how many times I've seen this sort of thing.   And
> the developers always tell me "Well, we denormalized for performance
> reasons ... "

  Now that's rich.  I don't think I've ever seen a database perform
  worse after it was normalized.  In fact, I can't even think of a
  situation where it could!

 ---------------------------------
   Frank Wiles <frank@wiles.org>
   http://www.wiles.org
 ---------------------------------


Re: Low Performance for big hospital server ..

From
Dave Cramer
Date:
Reading can be worse for a normalized db, which is likely what the
developers were concerned about.

One always has to be careful to measure the right thing.

Dave

Frank Wiles wrote:

>On Thu, 6 Jan 2005 09:06:55 -0800
>Josh Berkus <josh@agliodbs.com> wrote:
>
>
>
>>I can't tell you how many times I've seen this sort of thing.   And
>>the developers always tell me "Well, we denormalized for performance
>>reasons ... "
>>
>>
>
>  Now that's rich.  I don't think I've ever seen a database perform
>  worse after it was normalized.  In fact, I can't even think of a
>  situation where it could!
>
> ---------------------------------
>   Frank Wiles <frank@wiles.org>
>   http://www.wiles.org
> ---------------------------------

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: Denormalization WAS: Low Performance for big hospital server ..

From
Josh Berkus
Date:
Frank,

>   Now that's rich.  I don't think I've ever seen a database perform
>   worse after it was normalized.  In fact, I can't even think of a
>   situation where it could!

Oh, there are some.    For example, Primer's issues around his dating
database; it turned out that a fully normalized design resulted in very bad
select performance because of the number of joins involved.  Of course, the
method that did perform well was *not* a simple denormalization, either.

The issue with denormalization is, I think, that a lot of developers cut their
teeth on the likes of MS Access, Sybase 2 or Informix 1.0, where a
poor-performing join often didn't complete at all.   As a result, they got
into the habit of "preemptive tuning"; that is, doing things "for performance
reasons" when the system was still in the design phase, before they even know
what the performance issues *were*.

Not that this was a good practice even then, but the average software project
allocates grossly inadequate time for testing, so you can see how it became a
bad habit.  And most younger DBAs learn their skills on the job from the
older DBAs, so the misinformation gets passed down.

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: Low Performance for big hospital server ..

From
Rod Taylor
Date:
On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:
> Reading can be worse for a normalized db, which is likely what the
> developers were concerned about.

To a point. Once you have enough data that you start running out of
space in memory then normalization starts to rapidly gain ground again
because it's often smaller in size and won't hit the disk as much.

Moral of the story is don't tune with a smaller database than you expect
to have.

> Frank Wiles wrote:
>
> >On Thu, 6 Jan 2005 09:06:55 -0800
> >Josh Berkus <josh@agliodbs.com> wrote:
> >
> >
> >
> >>I can't tell you how many times I've seen this sort of thing.   And
> >>the developers always tell me "Well, we denormalized for performance
> >>reasons ... "
> >>
> >>
> >
> >  Now that's rich.  I don't think I've ever seen a database perform
> >  worse after it was normalized.  In fact, I can't even think of a
> >  situation where it could!
> >
> > ---------------------------------
> >   Frank Wiles <frank@wiles.org>
> >   http://www.wiles.org
> > ---------------------------------
--


Re: Denormalization WAS: Low Performance for big hospital

From
Frank Wiles
Date:
On Thu, 6 Jan 2005 09:38:45 -0800
Josh Berkus <josh@agliodbs.com> wrote:

> Frank,
>
> >   Now that's rich.  I don't think I've ever seen a database perform
> >   worse after it was normalized.  In fact, I can't even think of a
> >   situation where it could!
>
> Oh, there are some.    For example, Primer's issues around his dating
> database; it turned out that a fully normalized design resulted in
> very bad select performance because of the number of joins involved.
> Of course, the method that did perform well was *not* a simple
> denormalization, either.
>
> The issue with denormalization is, I think, that a lot of developers
> cut their teeth on the likes of MS Access, Sybase 2 or Informix 1.0,
> where a poor-performing join often didn't complete at all.   As a
> result, they got into the habit of "preemptive tuning"; that is, doing
> things "for performance reasons" when the system was still in the
> design phase, before they even knew what the performance issues
> *were*.
>
> Not that this was a good practice even then, but the average software
> project allocates grossly inadequate time for testing, so you can see
> how it became a bad habit.  And most younger DBAs learn their skills
> on the job from the older DBAs, so the misinformation gets passed
> down.

  Yeah, the more I thought about it: I had a fraud detection system I
  built for a phone company years ago that, when completely normalized,
  couldn't get the sub-second response the users wanted. It was Oracle
  and we didn't have the best DBA in the world.

  I ended up having to push about 20% of the deep call details into
  flat files, and surprisingly enough it was faster to grep the flat
  files than to use the database because, as was previously mentioned,
  of all of the joins.

 ---------------------------------
   Frank Wiles <frank@wiles.org>
   http://www.wiles.org
 ---------------------------------


Re: Low Performance for big hospital server ..

From
Richard_D_Levine@raytheon.com
Date:
In my younger days I denormalized a database for performance reasons and
have paid for it dearly with increased maintenance costs.  Adding
enhanced capabilities and new functionality will quickly render
denormalization worse than useless.  --Rick



                         
Frank Wiles <frank@wiles.org>
Sent by: pgsql-performance-owner@postgresql.org
01/06/2005 12:12 PM
To: Josh Berkus <josh@agliodbs.com>
cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Low Performance for big hospital server ..



On Thu, 6 Jan 2005 09:06:55 -0800
Josh Berkus <josh@agliodbs.com> wrote:

> I can't tell you how many times I've seen this sort of thing.   And
> the developers always tell me "Well, we denormalized for performance
> reasons ... "

  Now that's rich.  I don't think I've ever seen a database perform
  worse after it was normalized.  In fact, I can't even think of a
  situation where it could!

 ---------------------------------
   Frank Wiles <frank@wiles.org>
   http://www.wiles.org
 ---------------------------------






Re: Low Performance for big hospital server ..

From
Yann Michel
Date:
Hi

On Thu, Jan 06, 2005 at 12:51:14PM -0500, Rod Taylor wrote:
> On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:
> > Reading can be worse for a normalized db, which is likely what the
> > developers were concerned about.
>
> To a point. Once you have enough data that you start running out of
> space in memory then normalization starts to rapidly gain ground again
> because it's often smaller in size and won't hit the disk as much.

Well, in data warehousing applications you'll often denormalize your
entities because most of the time the access method is a (more or less)
simple select.

Regards,
Yann

Re: Low Performance for big hospital server ..

From
Greg Stark
Date:
Frank Wiles <frank@wiles.org> writes:

>   Now that's rich.  I don't think I've ever seen a database perform
>   worse after it was normalized.  In fact, I can't even think of a
>   situation where it could!

Just remember. All generalisations are false.

--
greg

Re: Low Performance for big hospital server ..

From
"Joshua D. Drake"
Date:
Greg Stark wrote:
> Frank Wiles <frank@wiles.org> writes:
>
>
>>  Now that's rich.  I don't think I've ever seen a database perform
>>  worse after it was normalized.  In fact, I can't even think of a
>>  situation where it could!
>
>
> Just remember. All generalisations are false.

In general, I would agree.

Sincerely,

Joshua D. Drake



>


--
Command Prompt, Inc., home of PostgreSQL Replication, and plPHP.
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - jd@commandprompt.com - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL
