Thread: Create index hanging
Hi,

We have a couple of tables (40,000 and 600,000 rows each) on which I am trying to build an index on an integer column.

When I issue the "create index foo_index on foo(id);" command in psql, it just does not return. I've waited over half an hour for the smaller table.

The same command worked with no problems on a test environment with roughly the same data.

Any ideas what may be going on? When I use the "top" command, I don't see postmaster doing any work during this time, so I suspect that the create index has gone into some sort of wait mode.

Will appreciate any suggestions.

Thanks

Claire
--
Claire McLister                 mclister@zeesource.net
1684 Nightingale Avenue Suite 201
Sunnyvale, CA 94087             408-733-2737 (fax)
http://www.zeemaps.com
Thanks. Haven't VACUUMed ever. (Sloppy housekeeping :-) Will do so and let you know if it helped.

There seems to have been some other problem too: I had to forcefully shut down the postmaster just now and restart it, as it was beginning to block other requests as well. If I Ctrl-C the psql command for create index, it shouldn't cause a problem, right?

On Jul 20, 2006, at 1:48 PM, Will Glynn wrote:

> On Jul 20, 2006, at 4:11 PM, Claire McLister wrote:
>
>> Hi,
>>
>> We have a couple of tables (40,000 and 600,000 rows each) on which
>> I am trying to build an index on an integer column.
>>
>> When I issue the "create index foo_index on foo(id);" command in
>> psql it just does not return. I've waited over half an hour for
>> the smaller table.
>>
>> The same command worked with no problems on a test environment
>> with roughly the same data.
>>
>> Any ideas what may be going on? When I use the "top" command, I
>> don't see postmaster as working during this time. So I suspect
>> that the create index has gone into some sort of a wait mode.
>>
>> Will appreciate any suggestions.
>>
>> Thanks
>>
>> Claire
>
> Have you VACUUMed recently? I've seen this happen when there's a
> ridiculous number of dead rows in the table.
>
> --Will Glynn
> Freedom Health Systems
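[Editor's note: for reference, the housekeeping suggested above is a plain VACUUM. A minimal sketch, using the table name `foo` from the original CREATE INDEX example:

```sql
-- Reclaim space from dead rows and refresh planner statistics.
-- On a table that has never been vacuumed, this can take a while.
VACUUM ANALYZE foo;
```

A plain VACUUM does not take an exclusive lock and so does not block normal reads and writes; VACUUM FULL, which rewrites the table, does.]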
No, this is a brand new index, so no drops.

On Jul 20, 2006, at 2:03 PM, Ian Harding wrote:

> On 7/20/06, Claire McLister <mclister@zeesource.net> wrote:
>> Hi,
>>
>> We have a couple of tables (40,000 and 600,000 rows each) on which
>> I am trying to build an index on an integer column.
>>
>> When I issue the "create index foo_index on foo(id);" command in
>> psql it just does not return. I've waited over half an hour for the
>> smaller table.
>>
>> The same command worked with no problems on a test environment with
>> roughly the same data.
>>
>> Any ideas what may be going on?
>
> I thought maybe locks, but my brief experimentation shows that locks
> don't seem to block index creation, but they do seem to block dropping
> one. Hmmm. Are you trying to drop it first?
Claire McLister <mclister@zeesource.net> writes:
> We have a couple of tables (40,000 and 600,000 rows each) on which I
> am trying to build an index on an integer column.
>
> When I issue the "create index foo_index on foo(id);" command in
> psql it just does not return. I've waited over half an hour for the
> smaller table.

It sounds like someone had a write lock on the table. Did you try
looking in pg_locks?

			regards, tom lane
Yes, that could be the case. We have a Python function that imports CSV files, which can take a long time, and it may have been running at that time. I didn't look at the pg_locks view. What should I be looking for?

On Jul 20, 2006, at 3:13 PM, Tom Lane wrote:

> Claire McLister <mclister@zeesource.net> writes:
>> We have a couple of tables (40,000 and 600,000 rows each) on which I
>> am trying to build an index on an integer column.
>>
>> When I issue the "create index foo_index on foo(id);" command in
>> psql it just does not return. I've waited over half an hour for the
>> smaller table.
>
> It sounds like someone had a write lock on the table. Did you try
> looking in pg_locks?
>
> 			regards, tom lane
Claire McLister <mclister@zeesource.net> writes:
> Yes, that could be the case. We have a python function that imports
> CSV files, which can take a long time, and that may have been running
> during that time. I didn't look at the pg_lock file. What should I be
> looking for?

A record with granted = false for the stuck process (joining pid to
pg_stat_activity will help you determine which process goes with each
record, or look in "ps" output). If you find one, look for a record
for the same lock with granted = true and a conflicting lock type;
that tells you which process is blocking the lock.

			regards, tom lane
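[Editor's note: the recipe above can be sketched as a single query. This is only a sketch: the column names assume a 2006-era PostgreSQL (8.1), where the pid column of pg_stat_activity was named procpid, and 'foo' stands in for the table from the original example:

```sql
-- List every lock touching table foo, waiters first.
-- Rows with granted = false are stuck; a granted = true row on the
-- same relation with a conflicting mode identifies the blocker.
SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = (SELECT oid FROM pg_class WHERE relname = 'foo')
ORDER BY l.granted;  -- false sorts before true, so waiters come first
```

Seeing the queries in current_query requires stats_command_string to be enabled, as noted later in the thread.]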
On Fri, 2006-07-21 at 14:17, Claire McLister wrote:
> Yes, that could be the case. We have a python function that imports
> CSV files, which can take a long time, and that may have been running
> during that time. I didn't look at the pg_lock file. What should I be
> looking for?

I have the following in ~/.psqlrc:

---------- snip here ---------------------
prepare locks(bigint) as
  select c.relname, l.*
  from pg_locks l
  left outer join pg_class c on c.oid = l.relation
  where l.pid = $1
  union all
  select c.relname, l.*
  from pg_locks l
  left outer join pg_class c on c.oid = l.relation
  where l.pid in (select ml.pid
                  from pg_locks ml, pg_locks cl
                  where cl.pid = $1
                    and not cl.granted
                    and cl.transaction = ml.transaction
                    and ml.mode = 'ExclusiveLock');

\set lck 'execute locks'

\set ps 'SELECT procpid, substring(current_query for 97), to_char((now()-query_start), \'HH24:MI:SS\') as t FROM pg_stat_activity where current_query not like \'%<insufficient%\' and current_query not like \'%IDLE%\' order by t desc;'
---------- snip here ---------------------

Then use:

dbprompt=> :lck(pid);

where "pid" is the process id of the backend running your blocking query.

Finding that pid is easy if you enable command strings in the config file (without that you will see the backends but not the queries, which makes this useless for your purpose): just use the :ps defined above. It is optimized for my terminal's width, so you may want to change the line truncation size (97 in my case) to fit yours.

HTH,
Csaba.
Thanks. I'll look into this the next time it happens. For now, I tried the create index commands again, and they worked in a split second. So it must have been the lock contention, not the VACUUMing.

On Jul 21, 2006, at 6:45 AM, Tom Lane wrote:

> Claire McLister <mclister@zeesource.net> writes:
>> Yes, that could be the case. We have a python function that imports
>> CSV files, which can take a long time, and that may have been running
>> during that time. I didn't look at the pg_lock file. What should I be
>> looking for?
>
> A record with granted = false for the stuck process (joining pid to
> pg_stat_activity will help you determine which process goes with each
> record, or look in "ps" output). If you find one, look for a record
> for the same lock with granted = true and a conflicting lock type;
> that tells you which process is blocking the lock.
>
> 			regards, tom lane