Re: Is There Any Way .... - Mailing list pgsql-performance

From: Kevin Grittner
Subject: Re: Is There Any Way ....
Date:
Msg-id: s3439423.035@gwmta.wicourts.gov
In response to: Is There Any Way ....  ("Lane Van Ingen" <lvaningen@esncc.com>)
List: pgsql-performance
** Low Priority **

Human feedback from testers and users has proven pretty effective
at catching errors in the "human assisted" cache configuration.  When
the people setting up the servers have missed the named cache
configuration, and all they had was the single general purpose cache, the
omission has been caught because of user complaints about performance.

There was an attempt made to simulate database queries -- hitting a
client-side cache for some of the roughly 100 tables (out of 300 in the
well-normalized schema) which fit this pattern of usage.  It didn't prove
very cost effective.  It just makes more sense to allow the DBAs to tweak
database performance through database configuration changes than to
jump through that many hoops in application code to try to achieve it
where it becomes an issue.

As far as I know, you can't use this technique in Microsoft SQL Server or
Oracle.  They are using Sybase Adaptive Server Enterprise (ASE).  I
believe named caches were added in version 12.0, long after Microsoft
split off its separate code stream based on the Sybase code base.
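
For anyone curious, configuring a named cache and binding a table to it
in ASE looks roughly like this (the cache name, size, and table are made
up for illustration):

    -- create a 100 MB named cache, then bind one table to it
    sp_cacheconfig 'hot_tables_cache', '100M'
    go
    sp_bindcache 'hot_tables_cache', 'mydb', 'dbo.lookup_codes'
    go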

-Kevin


>>> "Dario" <dario_d_s@unitech.com.ar> 10/05/05 6:16 AM >>>
I'm sure there will be cases when some human-assisted caching algorithm
will perform better than a mathematical, statistics-based design, but it
will also depend on the "human". And it will probably make things worse
when the workload changes and the human doesn't realize it. It must be
considered that, today, hardware cost is not the 90% of the budget that it
used to be. Throwing hardware at the system can be as expensive as
throwing certified "IT staff" at it.
(just think of the coffee budget! :-) )

If you need to improve "user perception", you can do other things, like
caching a table in your client (with a trigger for any change on table X
updating a table called "timestamp_table_change", plus a small select
against this table, you can easily know when you must refresh your
client's copy). If it is an application server serving HTTP requests, then
"user perception" will be tied to bandwidth AND to the application server
(some of them have a cache for requests).
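
A minimal sketch of that idea in PostgreSQL could look like the following
(the table name "x" and the helper objects are placeholders, not something
from a real schema):

    -- one row per cached table, recording when it last changed
    CREATE TABLE timestamp_table_change (
        table_name text PRIMARY KEY,
        changed_at timestamptz NOT NULL DEFAULT now()
    );
    INSERT INTO timestamp_table_change (table_name) VALUES ('x');

    CREATE OR REPLACE FUNCTION note_table_change() RETURNS trigger AS $$
    BEGIN
        UPDATE timestamp_table_change
           SET changed_at = now()
         WHERE table_name = TG_TABLE_NAME;
        RETURN NULL;  -- AFTER statement trigger; return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER x_changed
        AFTER INSERT OR UPDATE OR DELETE ON x
        FOR EACH STATEMENT EXECUTE PROCEDURE note_table_change();

    -- the client polls this cheap query and reloads its cached copy of x
    -- only when changed_at is newer than the value it last saw
    SELECT changed_at FROM timestamp_table_change WHERE table_name = 'x';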

FYI, I don't recall a mechanism in MSSQL to cache a table in buffers.
Oracle has some structures to allow that (you know). It uses its own
buffers. Since version 9i, you can set three different data buffers: one
(the recycle cache) for low-usage tables (I mean tables with blocks which
don't have much chance of being queried again, like a very large
historical table), one for high-usage tables (the keep cache), and the
regular one (the difference is in the algorithm). And you must also set a
buffer cache size for tablespaces with a different block size. But there
is no such thing as "create table x keep entirely in buffer". And above
all things, the Oracle docs always state "first tune the design, then tune
the queries, then start tuning the engine".
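
In concrete terms, that Oracle setup looks something like this (the sizes
and table names are invented for the example; the regular pool is sized
with db_cache_size):

    ALTER SYSTEM SET db_keep_cache_size    = 64M;  -- small, hot tables
    ALTER SYSTEM SET db_recycle_cache_size = 32M;  -- blocks unlikely to be reread
    ALTER SYSTEM SET db_16k_cache_size     = 32M;  -- for 16K-block tablespaces

    -- assign tables to the pools
    ALTER TABLE big_history_table  STORAGE (BUFFER_POOL RECYCLE);
    ALTER TABLE small_lookup_table STORAGE (BUFFER_POOL KEEP);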

greetings.



