Re: uh-oh - Mailing list pgsql-novice

From Tom Allison
Subject Re: uh-oh
Date
Msg-id 448D3F45.70602@tacocat.net
In response to Re: uh-oh  (Philip Hallstrom <postgresql@philip.pjkh.com>)
Responses Re: uh-oh  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-novice
Philip Hallstrom wrote:
>> I think I screwed up.
>>
>> I was running something in the background to update a table based on
>> the jobs output (once every 1-10 seconds) and while that was running I
>> created an index on the same table.
>>
>> Now that index is not used according to explain plans.
>> It does show up when I type '\di'
>> But I can't DROP INDEX.
>>
>> I think I'm in some trouble but I don't know how much.
>
>
> Have you vacuum analyzed that table?  Maybe the statistics still think a
> table scan is the best option?
>

Is it normal to run VACUUM ANALYZE on a table after building indexes?
I can give it a try, but I'm asking for "care and feeding" reasons.
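
In case it matters, this is the exact form I'd run (the table name "tokens" is a placeholder, not my real table):

```sql
-- Refresh the planner's statistics after the bulk updates and the
-- index build.  "tokens" is a placeholder for the real table name.
VACUUM ANALYZE tokens;
```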

I did run VACUUM and ANALYZE separately, with no effect.

Given 2.6 million rows and an estimated cost of more than 80,000 pages, I would
have expected the planner to avoid a full table scan.
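
For the record, here's roughly how I'm checking (the table and column names below are placeholders for my real ones):

```sql
-- Placeholder table/column names.  The plan output is what shows
-- whether the new index is being considered at all, and at what
-- estimated cost the sequential scan comes in.
EXPLAIN SELECT * FROM messages WHERE sender = 'foo@example.com';
```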

I'll get back to it later.  I've had to learn how to dump/restore really quickly,
because somewhere along the way the indexes were built with some "illegal" names
and I couldn't drop them.  The names were "public.email_address" instead of
"email_address" for a table in the public schema.  pgaccess is not my friend
anymore.
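
In hindsight, I suspect the drop would have worked with the whole name double-quoted, since the dot was apparently part of the index name itself rather than a schema separator; I haven't gone back to verify this, though:

```sql
-- The dot here is part of the name, not a schema qualifier, so the
-- identifier has to be double-quoted as a single unit:
DROP INDEX "public.email_address";
```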

I'm not sure I did the dump/restore correctly.  The man pages' instructions didn't
match real life.

pg_dump -d email -c -f email.out
pg_restore -d email -f email.out

gave all kinds of errors last night.  I'll have to make a little database and
test it until I get them right.
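
From poking at it so far, I think the mismatch is that a plain-text pg_dump file has to be fed to psql, while pg_restore only understands the custom archive format, and its -f names an *output* file rather than the input.  A sketch of what I'll try on the test database (database name "email" as before):

```shell
# Plain-text dump: restore by feeding the SQL back through psql.
# (If I read the man page right, pg_dump's -d is not the database
# name; the database goes last on the command line.)
pg_dump -c -f email.out email
psql -d email -f email.out

# Alternative: a custom-format dump, which pg_restore can read.
# The archive file is a positional argument, not -f.
pg_dump -Fc -f email.dump email
pg_restore -c -d email email.dump
```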
