Baron Schwartz's recent post [1] on working set size got me thinking.
I know how to tell when my database's working set has exceeded
available memory (the cache hit rate plummets and performance
collapses), but it's less clear how I could predict when that is
about to happen.
Baron's proposed method for defining working set size is interesting. Quoth:
> Quantifying the working set size is probably best done as a percentile over time.
> We can define the 1-hour 99th percentile working set size as the portion of the data
> to which 99% of the accesses are made over an hour, for example.
I'm not sure whether it would be possible to calculate that today in
Postgres. Does anyone have any advice?
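To make it concrete, here's a rough sketch (in Python rather than SQL)
of the calculation I have in mind, assuming we could somehow obtain a
per-page access trace for the window. As far as I know Postgres
doesn't expose such a trace today, so the `accesses` input and the
helper itself are purely hypothetical:

    from collections import Counter

    def working_set_size(accesses, pct=0.99, page_bytes=8192):
        # accesses: one page identifier per access event, covering the
        # window of interest (e.g. one hour of traffic).
        counts = Counter(accesses)
        total = sum(counts.values())
        covered = 0
        pages = 0
        # Take the hottest pages first until they account for pct of accesses.
        for _, n in counts.most_common():
            covered += n
            pages += 1
            if covered >= pct * total:
                break
        return pages * page_bytes

    # Hypothetical trace: (relation, block number) pairs from some
    # external instrumentation.
    trace = [("orders", 1), ("orders", 1), ("orders", 2), ("customers", 7)]
    print(working_set_size(trace))  # bytes of the hottest pages covering 99%

The hard part, of course, is getting that access trace out of Postgres
in the first place.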
Best regards,
Peter
[1]: http://www.fusionio.com/blog/will-fusionio-make-my-database-faster-percona-guest-blog/
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut