I'd recommend creating a table that holds the info on all your cached pages (the link to the large object, plus the product #'s of the items used to generate the page), and then putting an update/delete trigger on your main products table that does a DELETE FROM cache_toc WHERE cached_item = OLD.product_id.
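Something along these lines should do it (untested, assumes you have plpgsql installed as a language, and the exact trigger syntax depends on which Postgres version you're running; the table/column names just follow the ones above):

    CREATE TABLE cache_toc (
        page_lo     oid,        -- large object holding the rendered page
        cached_item integer     -- product_id of an item used to build that page
    );

    CREATE FUNCTION expire_cache() RETURNS trigger AS '
    BEGIN
        -- throw away the cache rows for the product that just changed
        DELETE FROM cache_toc WHERE cached_item = OLD.product_id;
        RETURN OLD;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER products_expire_cache
        AFTER UPDATE OR DELETE ON products
        FOR EACH ROW EXECUTE PROCEDURE expire_cache();

If one page was built from several products you'd store one cache_toc row per (page, product) pair; once any of a page's rows are gone, your page-serving code treats the cached copy as stale and regenerates it (you'd probably also want to lo_unlink the orphaned large object at some point).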
While OIDs will work, you may run into problems when dumping/restoring your database (pg_dump doesn't dump OIDs by default), so if you can avoid them by using another field you're probably better off. An OID is just an unsigned int4 anyway.
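For the products table itself, for instance, a plain serial column gives you a stable key that survives a dump/restore without any special flags (column names are just an illustration):

    CREATE TABLE products (
        product_id  serial PRIMARY KEY,
        name        text,
        price       numeric
    );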
At 05:18 PM 4/7/00, Andrew Schmeder wrote:
Hi all... I just jumped on the postgres boat for a new project and I have a sort of 'design / theory' question.
I am building a web application and I want to store pieces of the output in the database as large objects so that I don't have to recalculate them (I am predicting a sizeable CPU overhead for generating the pages...). So, let's say I render page X, which contains a list of 10 items that were queried out of a bunch of different tables. Now I take that rendering and store it in my cache table... this is fine, but if any of those 10 items are changed I want the cache entry to automatically expire so that the page is recalculated next time it is requested. In another case, a page will be dependent on the table itself (such as the insertion of a new record...).
So far, I have learned that every row, table, etc. in postgres can be associated with a globally unique object id (oid), correct? I tried to explore the system tables, pg_, etc. but I could not find a table which actually just contained the oids, so I assume that this is hidden somewhere on a layer below the database... I want to know what the best method to implement this system will be... I am thinking triggers / stored procedures? But where do I store them... on each table that may have a dependency?