Re: very slow queries and ineffective vacuum - Mailing list pgsql-general
From: Pavel Stehule
Subject: Re: very slow queries and ineffective vacuum
Msg-id: CAFj8pRBbQEk8vhrVaJVDHGseKCwyUJ3Oh8VcBkWw8wu=Jp4X=g@mail.gmail.com
In response to: very slow queries and ineffective vacuum (Lukasz Wrobel <lukasz.wrobel@motorolasolutions.com>)
Responses: Re: very slow queries and ineffective vacuum
List: pgsql-general
Hi
What is the output of a VACUUM VERBOSE statement?
VACUUM can be blocked by a forgotten transaction. Check the pg_stat_activity view for old sessions in the "idle in transaction" state. Such connections cannot be reused, and you may see error messages about missing connections. I have seen this issue several times in Java applications that don't handle transactions correctly. A forgotten two-phase-commit (2PC) transaction can have the same effect.
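The checks above can be sketched with two catalog queries (column names as in the 9.3-era catalogs; adjust for your version):

```sql
-- Sessions stuck in "idle in transaction"; these hold back VACUUM's
-- dead-row horizon for as long as their transaction stays open.
SELECT pid, usename, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;

-- Forgotten prepared (2PC) transactions have the same effect.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;
```

Any row with a very old xact_start (or prepared timestamp) is a candidate culprit; after fixing the application, a stuck prepared transaction can be cleared with ROLLBACK PREPARED.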
When VACUUM has not been executed for a long time, the fastest repair is to export with pg_dump and reload. Another way is to drop all indexes, run VACUUM FULL, and recreate the indexes.
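For one table, the drop/VACUUM FULL/recreate route could look like the following sketch (the table and index names here are hypothetical; note that VACUUM FULL takes an exclusive lock, so do this in a maintenance window):

```sql
-- Hypothetical bloated table "orders" with one index; repeat per table.
DROP INDEX orders_customer_idx;
VACUUM FULL VERBOSE orders;   -- rewrites the table, reclaiming dead space
CREATE INDEX orders_customer_idx ON orders (customer_id);
ANALYZE orders;               -- refresh planner statistics afterwards
```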
Autovacuum is based on tracked statistics - look at your tables in pg_stat_user_tables, where you can check the last autovacuum timestamp. Sometimes autovacuum has too low a priority and is frequently cancelled.
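To see per-table dead-tuple counts and when (auto)vacuum last ran:

```sql
-- Dead tuples and last vacuum/analyze times, worst offenders first
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

Tables with large n_dead_tup and a NULL or very old last_autovacuum are the ones autovacuum is failing to keep up with.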
Regards
Pavel Stehule
2015-06-30 14:57 GMT+02:00 Lukasz Wrobel <lukasz.wrobel@motorolasolutions.com>:
Hello.

I have multiple problems with my database, the biggest of which is how to find out what is actually wrong.

First of all, I have a 9.3 Postgres database that has been running for about a month. Right now the queries on that database are running very slowly (a select with a simple "where" on a non-indexed column of a table with about 5000 records takes 1.5s, a complicated Hibernate select with 7 joins on tables of about 5000 records takes about 15s, and an insert or update on a table with 35000 records takes up to 20 minutes).

The tables and indexes on those tables are bloated to the point where this query: https://wiki.postgresql.org/wiki/Show_database_bloat shows hundreds of MB of wasted bytes. For whatever reason there is also no data in the pg_stat* tables.

Due to the long query times, there are multiple errors in my application logs, like "No free connection available", "Could not synchronize database state with session", or "Failed to rollback transaction", and the application fails to start in the required time.

The only thing that seems to fix the situation is a vacuum full of the entire database. Regular vacuum doesn't even lower the dead tuple count (tuples appear by the thousands during application launch). Reindexing all the indexes in the database didn't help either. All autovacuum parameters are at their defaults.

There don't seem to be any issues with disk space, memory or CPU, as none of those is even 50% used (per df and top).

Is there any good tool that will monitor the queries and generate a report with useful information on what might be the problem? I tried pgBadger, but all I got were specific queries and their times, and the long query times are just one of the symptoms of what's wrong with the database, not the cause. Perhaps I'm missing some indexes on the tables (creating them on the columns used in the where clauses of the long queries seemed to halve their times).

Also, how can I monitor my transactions and whether they are closed properly?

I will be grateful for any help, and I can provide more details if needed.

Best regards,
Lukasz