On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:
> Bill Moran wrote:
> > Rajesh Kumar Mallah wrote:
> >> Hi,
> >>
> >> The problem was solved by reloading the table.
> >> The query now takes only 3 seconds, but that is
> >> not a solution.
> >
> > If dropping/recreating the table improves things, then we can reasonably
> > assume that the table is pretty active with updates/inserts. Correct?
>
> Yes, the table results from an import process and undergoes lots
> of inserts and updates, but that's before the vacuum full operation.
> The table is not accessed during vacuum. What I want to know is:
> is there any way to automate the dumping and reloading of a table
> individually? Will the below be safe and effective:
That shouldn't be necessary, assuming you run a plain vacuum (not full)
regularly. However, looking back at your original posting, the vacuum output
doesn't show any rows that need removing:
# VACUUM full verbose eyp_rfi;
INFO: vacuuming "public.eyp_rfi"
INFO: "eyp_rfi": found 0 removable, 505960 nonremovable row versions in
71987 pages
DETAIL: 0 dead row versions cannot be removed yet.
Since your select count(*) showed 505960 rows, I can't see how
dropping/recreating the table could make a difference to a sequential scan.
And since no indexes are being used, I don't see how it could be related to
those.
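As an aside, on regular vacuuming: for a table like this, a plain vacuum plus
a statistics refresh after each import batch is usually enough (a sketch only):

-- reclaims dead row versions without taking an exclusive lock and
-- updates the planner statistics; unlike VACUUM FULL it does not
-- compact the table file
vacuum analyze eyp_rfi;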
> begin work;
> create table new_tab AS select * from tab;
> truncate table tab;
> insert into tab select * from new_tab;
> drop table new_tab;
> commit;
> analyze tab;
>
> I have not tried it but plan to do so,
> but I feel the insert would take ages to update
> the indexes, if any.
It will have to update them, which will take time.
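If that becomes the bottleneck, a common variation (a sketch only; the index
name tab_col_idx and column col are hypothetical, with one drop/create pair
needed per real index) is to drop the indexes before the bulk insert and
rebuild each one in a single pass afterwards:

begin work;
create table new_tab as select * from tab;
truncate table tab;
drop index tab_col_idx;                 -- repeat for each index on tab
insert into tab select * from new_tab;  -- no indexes to maintain row by row
create index tab_col_idx on tab (col);  -- rebuild in one pass
drop table new_tab;
commit;
analyze tab;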
> BTW
>
> Is there any way to disable checks and triggers on
> a table temporarily while loading data? (Is updating
> reltriggers in pg_class safe?)
You can take a look at how pg_restore handles this and copy its approach.
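For illustration, this is roughly what pg_restore does (a sketch only, for
the 7.x catalog layout; it needs superuser rights, and since zeroing
reltriggers also disables foreign key checks, the data you load must already
be consistent):

-- tell the backend the table has no triggers (disables FK checks too)
update pg_class set reltriggers = 0 where relname = 'eyp_rfi';

-- ... load the data ...

-- restore the real trigger count afterwards
update pg_class set reltriggers = (
        select count(*) from pg_trigger where pg_class.oid = tgrelid)
 where relname = 'eyp_rfi';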
--
Richard Huxton
Archonet Ltd