Thread: about postgres performance issue

about postgres performance issue

From
jing han
Date:
Hello,

We have a database with around 20000 records. Most of
the time we need to update records in two tables.
These two tables hold around 6000 records. When we
update them, the records in one table are simply updated
in place, while in the other table some records are
deleted and new records are inserted.
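
Roughly, one update cycle looks like the sketch below (the
table and column names are made up to illustrate the
pattern, not our real schema):

    BEGIN;
    -- table 1: rows are only updated in place (illustrative names only)
    UPDATE device_status
       SET status = 'up', last_seen = now()
     WHERE device_id = 42;
    -- table 2: some rows are deleted and new rows are inserted
    DELETE FROM interface_stats WHERE device_id = 42;
    INSERT INTO interface_stats (device_id, ifindex, octets_in, octets_out)
        VALUES (42, 1, 123456, 654321);
    COMMIT;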

When we repeatedly update these two tables (note: we
execute "vacuum --analyze" every time after we update
them) and check our log file, the first time took 8
seconds, the second time took 9 seconds, the third time
took 11 seconds, the fourth time took 15 seconds, the
fifth time took 18 seconds, the sixth time took 22
seconds, and so on. The time it takes to update the two
tables keeps increasing.

We checked the database file directory and found that
the database files had grown.

Then we run "vacuum full", or use a script to do

    su -l postgres -c "pg_dump -Fc -f bigNetManDB netman; \
      sleep 2; pg_restore -v --clean -d netman bigNetManDB"

to try to bring the database size back down.
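
(For reference, the "vacuum full" variant can be run the
same way, e.g.

    su -l postgres -c "vacuumdb --full --analyze netman"

though that exact invocation is just an example.)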

We still found that after a "vacuum full", the first
update of those two tables took 10 seconds, the second
took 13 seconds, and so on, which means we can never get
back to the original 8-second performance.

Can you give me any ideas for dealing with this database
growth problem?

Any help will be greatly appreciated.


jing







Re: about postgres performance issue

From
"Jeff Eckermann"
Date:
"jing han" <jing_han_66@yahoo.com> wrote in message 
news:20050728151813.46697.qmail@web53507.mail.yahoo.com...
> Hello,
>
> We have a database with around 20000 records. Most of
> the time we need to update records in two tables.
> These two tables hold around 6000 records. When we
> update them, the records in one table are simply updated
> in place, while in the other table some records are
> deleted and new records are inserted.
>
> When we repeatedly update these two tables (note: we
> execute "vacuum --analyze" every time after we update
> them) and check our log file, the first time took 8
> seconds, the second time took 9 seconds, the third time
> took 11 seconds, the fourth time took 15 seconds, the
> fifth time took 18 seconds, the sixth time took 22
> seconds, and so on. The time it takes to update the two
> tables keeps increasing.
>
> We checked the database file directory and found that
> the database files had grown.
>
> Then we run "vacuum full", or use a script to do
>
>     su -l postgres -c "pg_dump -Fc -f bigNetManDB netman; \
>       sleep 2; pg_restore -v --clean -d netman bigNetManDB"
>
> to try to bring the database size back down.
>
> We still found that after a "vacuum full", the first
> update of those two tables took 10 seconds, the second
> took 13 seconds, and so on, which means we can never get
> back to the original 8-second performance.
>
> Can you give me any ideas for dealing with this database
> growth problem?

What version of PostgreSQL are you running?  If it is any version later than 
7.2.x (and it certainly ought to be), then you should be able to solve your 
problem with appropriate Free Space Map (FSM) settings, in combination with 
sufficiently frequent vacuuming.  What is "appropriate" and "sufficient" in 
your case depends on your local conditions.  Have a look at the manual for 
more information, and try reading the archives of the pgsql-performance list 
for relevant discussions.
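
As a purely illustrative sketch (the right numbers depend
entirely on your data volume and update rate), the FSM
settings live in postgresql.conf:

    # postgresql.conf -- illustrative values, not a recommendation
    max_fsm_pages = 200000      # disk pages worth of free space to track, cluster-wide
    max_fsm_relations = 1000    # number of tables and indexes to track

and the "sufficiently frequent" vacuuming can be as simple
as a cron entry for the postgres user, e.g.:

    # every 10 minutes; adjust to match your update cycle
    */10 * * * *  vacuumdb --analyze netman

Note that the FSM settings only take effect at server
start.  Depending on your version, a database-wide VACUUM
VERBOSE prints a summary of how much free space map it
actually needed, which is the best guide for sizing these.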

>
> Any help will be greatly appreciated.
>
>
> jing
>