Hi,
On Fri, Jan 10, 2020 at 02:13:47PM -0700, maschka, sam wrote:
>We have been seeing an issue where reads on a table seem to slowly degrade
>over time. When we run VACUUM ANALYZE on the table the operations perform
>much better. It usually takes about a week or two for the reads to degrade
>to a point where we have to manually run VACUUM ANALYZE. This has been
>happening to a handful of our tables. The mutation operations run against
>these tables are mainly update operations. Very few insert and delete
>operations are run against these tables. Also a jsonb object stores most of
>the data within each record. The size of the jsonb object can vary widely
>between records. We thought the autovacuum facility would have prevented us
>from having to periodically run VACUUM ANALYZE. Is this a bug or is it a
>configuration issue on our tables?
>
Unfortunately that's impossible to answer without much more detailed
information about the issue. This could easily be just a matter of
autovacuum not being aggressive enough on those tables, for example.
I don't think this qualifies as a bug, certainly not in its current
shape. Perhaps we'll discover something, but if I had to guess at this
point, I'd say it's likely a configuration issue.
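
If it does turn out that autovacuum is not keeping up with the update
rate, the usual remedy is to make it more aggressive on the affected
tables via per-table storage parameters. A rough sketch (the table name
and the thresholds are just placeholders, you'd need to tune them for
your workload):

```sql
-- Make autovacuum trigger sooner on a heavily-updated table.
-- "mytable" and the values below are examples, not recommendations.
ALTER TABLE mytable SET (
    autovacuum_vacuum_scale_factor = 0.02,   -- vacuum after ~2% of rows are dead
    autovacuum_analyze_scale_factor = 0.01   -- analyze after ~1% of rows change
);
```

The defaults (0.2 and 0.1) mean autovacuum waits until 20% of the table
is dead tuples, which can be far too lax for large, update-heavy tables.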
I suggest you post a more detailed report to pgsql-performance [1].
Right off the bat we need to know:
- PostgreSQL version
- size of the database
- non-default configuration
- what type of queries you're running
- how much slower it gets (2x, 10x, 100x?)
- does autovacuum run at all on the tables?
- does the execution plan change?
- if the execution plan does not change, maybe try some CPU profiling
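
To answer the autovacuum question, a query like this against
pg_stat_user_tables shows when (auto)vacuum and analyze last ran on the
table and how many dead tuples have accumulated ("mytable" is a
placeholder):

```sql
-- When did vacuum/analyze last touch the table, and how bloated is it?
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       last_analyze,
       last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```

A large and growing n_dead_tup with an old (or NULL) last_autovacuum
would point at autovacuum not keeping up.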
There's also [2] which is primarily focused on slow query analysis, but
maybe it'll give you ideas what other information to provide.
regards
[1] https://www.postgresql.org/list/pgsql-performance/
[2] https://wiki.postgresql.org/wiki/Slow_Query_Questions
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services