Thread: Why do queries take too much time to execute?
Hi.
I'm new here and I'm not sure if this is the right list for my problem.
Well, I have a very large database, with many tables and a great many records. Every day a large number of operations are performed on that DB, with queries that insert, delete and update. Once a week statistics are collected using VACUUM ANALYZE.
The problem is that after a period of time (about one month, I think), queries start taking far too long. A simple UPDATE can take 10 seconds or more.
If I make a backup, then drop and recreate the DB, everything goes back to normal.
Could anyone give me any guidance?
Anderson Boechat Lopes wrote:
> Well, I have a very large database, with many tables and a great many
> records. Every day a large number of operations are performed on that
> DB, with queries that insert, delete and update. Once a week statistics
> are collected using VACUUM ANALYZE.

I guess you need to run it much more frequently than that. Though you
haven't given the actual size of the data etc., once or twice per day
should be much better.

> The problem is that after a period of time (about one month, I think),
> queries start taking far too long. A simple UPDATE can take 10 seconds
> or more.

You need to run VACUUM FULL once in a while and set up the FSM
parameters correctly.

> If I make a backup, then drop and recreate the DB, everything goes back
> to normal.
>
> Could anyone give me any guidance?

Check the following for basic performance tuning:

http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html

HTH

Shridhar
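[For reference, the kind of settings Shridhar means would look roughly
like the sketch below. The database name "mydb" and the FSM values are
placeholders only; the right numbers depend on how many pages change
between vacuums on your system.]

    # in postgresql.conf -- illustrative values, tune for your install;
    # on 7.x the server needs a restart to pick these up
    max_fsm_pages = 100000      # pages with free space the server can track
    max_fsm_relations = 1000    # tables/indexes covered by the free space map

    # occasional full vacuum to reclaim space already lost to bloat
    psql -d mydb -c "VACUUM FULL ANALYZE;"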
> Well, I have a very large database, with many tables and a great many
> records. Every day a large number of operations are performed on that
> DB, with queries that insert, delete and update. Once a week statistics
> are collected using VACUUM ANALYZE.

Have VACUUM ANALYZE running once an HOUR if it's very busy. If you are
using 7.4, run the pg_autovacuum daemon that's in contrib/pg_autovacuum.

> The problem is that after a period of time (about one month, I think),
> queries start taking far too long. A simple UPDATE can take 10 seconds
> or more.

If running VACUUM ANALYZE once an hour doesn't help, try running a
VACUUM FULL once a week or so to fix the problem. Also, consider
upgrading to 7.4, whose indexes don't suffer from the same bloat.

Chris
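[A rough illustration of both suggestions. The database name is a
placeholder, the cron entry assumes it runs as a user allowed to
vacuum, and pg_autovacuum's exact flags vary, so check the contrib
README for your version.]

    # crontab entry: plain VACUUM ANALYZE every hour, quietly
    0 * * * *   vacuumdb --analyze --quiet mydb

    # or, on 7.4, run the contrib daemon and let it decide when tables
    # need vacuuming (build it from contrib/pg_autovacuum first)
    pg_autovacuum &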
Hmm... now I think I'm beginning to understand.

So plain VACUUM ANALYZE is what is recommended at least every day, after
adding or deleting a large number of records, and not VACUUM FULL
ANALYZE. I had been running VACUUM FULL ANALYZE every day, and after
some time I noticed the database was corrupted: I couldn't select
anything any more.

Do you think that if I run VACUUM ANALYZE once a day and VACUUM FULL
ANALYZE once a week, that will fix this problem?

Thanks for helping me, folks.

PS: Sorry for my grammar mistakes. My writing is not so good. :)

----- Original Message -----
From: "Shridhar Daithankar" <shridhar@frodo.hserus.net>
To: "Anderson Boechat Lopes" <teouique@terra.com.br>
Cc: <pgsql-performance@postgresql.org>
Sent: Monday, May 10, 2004 11:21 AM
Subject: Re: [PERFORM] Why do queries take too much time to execute?

> I guess you need to run it much more frequently than that. Once or
> twice per day should be much better.
>
> You need to run VACUUM FULL once in a while and set up the FSM
> parameters correctly.
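[Spelled out as cron entries, the schedule being proposed here would
look roughly like this; the times and the database name "mydb" are made
up for illustration.]

    # daily plain VACUUM ANALYZE during a quiet hour
    30 3 * * *   vacuumdb --analyze mydb
    # weekly VACUUM FULL ANALYZE early Sunday morning
    30 4 * * 0   vacuumdb --full --analyze mydb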
teouique@terra.com.br ("Anderson Boechat Lopes") writes:
> I'm new here and I'm not sure if this is the right list for my problem.

This should be OK...

> Well, I have a very large database, with many tables and a great many
> records. Every day a large number of operations are performed on that
> DB, with queries that insert, delete and update. Once a week statistics
> are collected using VACUUM ANALYZE.
>
> The problem is that after a period of time (about one month, I think),
> queries start taking far too long. A simple UPDATE can take 10 seconds
> or more.

It seems fairly likely that two effects are coming in:

-> The tables that are being updated have lots of dead tuples.

-> The vacuums aren't doing much good because the number of dead tuples
   is so large that you blow out the FSM (Free Space Map), and thus they
   can't free up space.

-> Another possibility is that if some tables shrink to a small size and
   then build back up to a large size (we see this with the _rserv_log_1_
   and _rserv_log_2_ tables used by the eRServ replication system), the
   statistics may need to be updated a LOT more often.

You might want to consider running VACUUM a whole lot more often than
once a week. If there is any regular time when the system isn't terribly
busy, you might want to vacuum some or all tables at that time.

pg_autovacuum might be helpful; it will automatically vacuum tables when
they have been updated heavily.

There may be more to your problem, but vacuuming more would let us
eliminate "too many dead tuples around" as a cause.

--
"cbbrowne","@","acm.org"
http://cbbrowne.com/info/x.html
Would-be National Mottos:
USA: "There oughta' be a law!"
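[One way to check whether the FSM really is being blown out, assuming
psql sends the INFO lines to stderr as it normally does; the database
name is a placeholder, and the free-space-map summary at the end of a
database-wide VACUUM VERBOSE is a 7.4 feature.]

    # database-wide vacuum with verbose output, capturing the INFO messages
    psql -d mydb -c "VACUUM VERBOSE;" 2> vacuum.log

    # the last lines summarize free space map usage; if the pages "needed"
    # figure exceeds max_fsm_pages, dead space is not being reused
    tail -n 5 vacuum.log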
On Mon, 10 May 2004, Anderson Boechat Lopes wrote:
> Hmm... now I think I'm beginning to understand.
>
> So plain VACUUM ANALYZE is what is recommended at least every day,
> after adding or deleting a large number of records, and not VACUUM FULL
> ANALYZE. I had been running VACUUM FULL ANALYZE every day, and after
> some time I noticed the database was corrupted: I couldn't select
> anything any more.

Hold it right there, full stop.

If you've got corruption, you've either found a rare corner case in
PostgreSQL (unlikely; corruption is not usually a big problem for
PostgreSQL) OR you have bad hardware.

Test your RAM, CPUs, and hard drives before going any further. Data
corruption, 99% of the time, is not the fault of PostgreSQL but the
fault of the hardware.
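[A rough sketch of the kind of hardware checks meant here. Device names
are placeholders, and the tools (smartmontools, badblocks, memtester, or
a bootable memtest86) may need to be installed separately.]

    # disk health report via SMART
    smartctl -a /dev/sda

    # read-only surface scan for bad blocks (ideally on an unmounted partition)
    badblocks -sv /dev/sda1

    # userspace memory test: lock and test 512 MB for one pass
    memtester 512M 1

    # for a more thorough RAM/CPU test, boot memtest86 from separate media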