On 12/16/25 23:40, Matthias Leisi wrote:
> An application (which we can’t change) is accessing some Postgres table, and we would like to record when the rows in that table were last read (meaning: appeared in a SELECT result). The ultimate goal would be that we can "age out" rows which have not been accessed in a certain period of time.
Why?
Given the small size of the table, what is the gain expected?
Also, is it assured that reading a row equals the importance of a row?
I would expect any solution to impose more overhead than simply
leaving the rows alone.
>
> The table contains some ten thousand rows, five columns, and we already record created / last updated using triggers. Almost all accesses will result in zero, one or very few records returned. Given the modest size of the table, performance considerations are not top priority.
>
> If we had full control over the application, we could e.g. use a function to select the records and then update some "last read" column. But since we don’t control the application, that’s not an option. On the other hand, we have full control over the database, so we could put some other "object" in lieu of the direct table.
>
> Any other ways this could be achieved?
>
> Thanks,
> Matthias
>
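For what it's worth, one way to sketch the "object in lieu of the direct table" idea is to rename the real table and expose it through a view over a volatile set-returning function that stamps rows as it returns them (Postgres has no SELECT triggers). All names here are illustrative, assuming a table called t:

```sql
-- Move the real table out of the application's way and add the tracking column.
ALTER TABLE t RENAME TO t_data;
ALTER TABLE t_data ADD COLUMN last_read timestamptz;

-- Volatile set-returning function: updates last_read and returns the rows.
CREATE FUNCTION t_read() RETURNS SETOF t_data
LANGUAGE sql VOLATILE AS $$
  UPDATE t_data SET last_read = now() RETURNING *;
$$;

-- The application keeps selecting from "t", which is now a view.
CREATE VIEW t AS SELECT * FROM t_read();
```

Two caveats: a volatile SQL function is an optimization barrier, so the application's WHERE clause is applied to the function's output and every SELECT stamps every row, i.e. last_read records "the table was read" rather than "this row was returned"; and a view over a function is not updatable, so any writes the application does against t would additionally need INSTEAD OF triggers. This only sketches the read path.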
--
Adrian Klaver
adrian.klaver@aklaver.com