Andrus,
You might consider something like materialized views:
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
Whether table caching is a good idea depends completely on the
demands of your application.
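The article above predates built-in materialized views, so the technique there amounts to an ordinary snapshot table kept in sync by hand. A minimal sketch of the idea (the names mytable1_mv and refresh_mytable1_mv are illustrative, not from the article):

```sql
-- Snapshot table that "materializes" the result of a query.
CREATE TABLE mytable1_mv AS
    SELECT * FROM mytable1;

-- Full refresh: rebuild the snapshot from the base table.
CREATE OR REPLACE FUNCTION refresh_mytable1_mv() RETURNS void AS $$
BEGIN
    DELETE FROM mytable1_mv;
    INSERT INTO mytable1_mv SELECT * FROM mytable1;
END
$$ LANGUAGE plpgsql;
```

Clients then read from mytable1_mv, and you call the refresh function on whatever schedule matches how stale the data is allowed to get.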
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)
On Aug 14, 2005, at 1:12 PM, Andrus Moor wrote:
> To increase performance, I'm thinking about storing copies of less
> frequently changed tables on the client computer.
> At startup, the client application compares last-change times and
> downloads newer tables from the server.
>
> CREATE TABLE lastchange (
> tablename text PRIMARY KEY,
> lastchange timestamp without time zone );
>
> INSERT INTO lastchange (tablename) VALUES ('mytable1');
> ....
> INSERT INTO lastchange (tablename) VALUES ('mytablen');
>
> CREATE OR REPLACE FUNCTION setlastchange() RETURNS "trigger"
> AS $$BEGIN
> UPDATE lastchange SET lastchange = now() WHERE tablename = TG_RELNAME;
> RETURN NULL;
> END$$ LANGUAGE plpgsql;
>
> CREATE TRIGGER mytable1_trig AFTER INSERT OR UPDATE OR DELETE ON
> mytable1
> FOR EACH STATEMENT EXECUTE PROCEDURE setlastchange();
> ....
> CREATE TRIGGER mytablen_trig AFTER INSERT OR UPDATE OR DELETE ON
> mytablen
> FOR EACH STATEMENT EXECUTE PROCEDURE setlastchange();
>
> Is table caching a good idea?
> Is this the best way to implement it?
>
> Andrus.
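
As for the startup check itself, a sketch of the query the client might run, assuming it remembers the timestamp of its last successful sync (the literal below is only a placeholder for that stored value):

```sql
-- Find tables changed since the client's last sync; the timestamp
-- literal stands in for the client's stored last-sync time.
SELECT tablename
FROM lastchange
WHERE lastchange > '2005-08-14 12:00:00';
```

Each table the query returns would then be re-downloaded and the stored sync time advanced.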