Thread: general table stats, ideas?
Hey folks,
I want to do a fairly simple thing, but so far I see no way to implement it with PostgreSQL, so I decided to ask the folks here.
My problem is: I need to benchmark a set of tables, where - we can assume - the schema of each table is unknown, and we have no assumption of any particular fields being present there (although, if there is no other way to do it, we could assume id bigint not null default nextval('someseq')).
Basically, I need to know when a certain row was selected (read), and when it was updated/created (insert). For that I need two fields. And although the latter could be done, and I more or less know how to do it (I think it can be done with a rule, and default = now()) - I have yet to find a way to do the same thing for select.
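For the write side, something along these lines is what I have in mind (only a sketch - table and column names are made up, and I'm using a trigger here rather than a rule):

    ALTER TABLE some_table
        ADD COLUMN created_at timestamptz NOT NULL DEFAULT now(),
        ADD COLUMN updated_at timestamptz NOT NULL DEFAULT now();

    -- the DEFAULT covers inserts; the trigger keeps updated_at current on updates
    CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER some_table_touch_updated_at
        BEFORE UPDATE ON some_table
        FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();

That part I can live with; it's the "when was this row last read" part that I'm stuck on.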
If someone loves a challenge, and is able to provide me with an answer to my problem - please do so. Also, if this can be done, but with slightly different requirements - please let me know too.
Thanks - this is my first post here, so hello everyone.
--
GJ
On Tue, Oct 7, 2008 at 9:47 AM, Grzegorz Jaśkiewicz <gryzman@gmail.com> wrote:
> If someone loves a challenge, and is able to provide me with an answer to my
> problem - please do so. Also, if this can be done, but with slightly
> different requirements - please let me know too.
> Thanks - this is my first post here, so hello everyone.

Would the facilities already in the PostgreSQL logging system not work for you?

--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug
2008/10/7 Richard Broersma <richard.broersma@gmail.com>:
> On Tue, Oct 7, 2008 at 9:47 AM, Grzegorz Jaśkiewicz <gryzman@gmail.com> wrote:
>> If someone loves a challenge, and is able to provide me with an answer to my
>> problem - please do so. Also, if this can be done, but with slightly
>> different requirements - please let me know too.
>> Thanks - this is my first post here, so hello everyone.
>
> Would the facilities already in the PostgreSQL logging system not work for you?
At the end of the day, maybe - but we don't log by default, because quite a few tables operate on bytea structures that are fairly large, and the log files' size would be substantial.
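Just to show what I mean - something like this in postgresql.conf (values only illustrative) would capture everything, but with those bytea payloads the log files would grow far too quickly:

    log_statement = 'all'               # none | ddl | mod | all
    log_line_prefix = '%t %u %d '       # timestamp, user, database
    log_min_duration_statement = 0      # log every statement with its duration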
--
GJ
> My problem is: I need to benchmark a set of tables, where - we can
> assume - the schema of each table is unknown, and we have no assumption of
> any particular fields being present there (although, if there is no other
> way to do it, we could assume id bigint not null default nextval('someseq')).
> Basically, I need to know when a certain row was selected (read), and
> when it was updated/created (insert). For that I need two fields. And
> although the latter could be done, and I more or less know how to do it (I
> think it can be done with a rule, and default = now()) - I have yet to
> find a way to do the same thing for select.

So basically you've got a slony replicated database that you want to check (on a row-by-row level) when something gets read/inserted/deleted?

It seems like you would want to add three fields to each table: last_read_time, last_update_time, and original_insert_time.

Then you restrict all access to the table and use security definer functions to allow access. These functions would also update said metadata in the table.

If you need an ongoing log of access to the tables, you could always add an access log table that looked like:

User (postgres/MYUSER) / Action (Select/Update/Insert) / Column (some_column_name) / New Value (blah-value)

Erm, that's if I understand your question right. :-/

-Mark
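P.S. A very rough sketch of the security definer idea - names made up and completely untested, just to illustrate:

    ALTER TABLE some_table
        ADD COLUMN last_read_time timestamptz,
        ADD COLUMN last_update_time timestamptz,
        ADD COLUMN original_insert_time timestamptz NOT NULL DEFAULT now();

    -- all reads go through this function instead of touching the table directly,
    -- so the read can be recorded as a side effect
    CREATE OR REPLACE FUNCTION read_some_table(p_id bigint)
    RETURNS SETOF some_table AS $$
    BEGIN
        UPDATE some_table SET last_read_time = now() WHERE id = p_id;
        RETURN QUERY SELECT * FROM some_table WHERE id = p_id;
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;

    REVOKE ALL ON some_table FROM public;
    GRANT EXECUTE ON FUNCTION read_some_table(bigint) TO public;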
On Tue, Oct 7, 2008 at 11:27 AM, Mark Roberts <mailing_lists@pandapocket.com> wrote:
>> My problem is: I need to benchmark a set of tables, where - we can
>> assume - the schema of each table is unknown, and we have no assumption of
>> any particular fields being present there (although, if there is no other
>> way to do it, we could assume id bigint not null default nextval('someseq')).
>> Basically, I need to know when a certain row was selected (read), and
>> when it was updated/created (insert). For that I need two fields. And
>> although the latter could be done, and I more or less know how to do it (I
>> think it can be done with a rule, and default = now()) - I have yet to
>> find a way to do the same thing for select.
>
> So basically you've got a slony replicated database that you want to
> check (on a row by row level) when something gets read/inserted/deleted?
>
> It seems like you would want to add three fields to each table:
> last_read_time, last_update_time, and original_insert_time

If you need to keep track of all updates, past and present, then a logging table of some kind would be needed, and possibly a trigger to update it. Or some kind of time-travel setup.
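Something along these lines, say (untested, names are only placeholders; note it still only catches writes - selects remain the hard part):

    CREATE TABLE access_log (
        logged_at   timestamptz NOT NULL DEFAULT now(),
        table_name  text        NOT NULL,
        username    text        NOT NULL DEFAULT current_user,
        action      text        NOT NULL,   -- 'INSERT' / 'UPDATE' / 'DELETE'
        row_id      bigint
    );

    CREATE OR REPLACE FUNCTION log_write() RETURNS trigger AS $$
    BEGIN
        -- assumes the id bigint column you mentioned is present
        IF TG_OP = 'DELETE' THEN
            INSERT INTO access_log (table_name, action, row_id)
            VALUES (TG_TABLE_NAME, TG_OP, OLD.id);
            RETURN OLD;
        ELSE
            INSERT INTO access_log (table_name, action, row_id)
            VALUES (TG_TABLE_NAME, TG_OP, NEW.id);
            RETURN NEW;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER some_table_log_write
        AFTER INSERT OR UPDATE OR DELETE ON some_table
        FOR EACH ROW EXECUTE PROCEDURE log_write();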