Thread: Select max(primary_key) taking a long time
Postgres 8.4.1
CentOS 5.4

I am trying to do
select max(primary_key) from some_table;

The explain looks like:
explain select max(primary_key) from some_table;
                                   QUERY PLAN
----------------------------------------------------------------------------
 Result  (cost=0.15..0.16 rows=1 width=0)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.00..0.15 rows=1 width=8)
           ->  Index Scan Backward using some_table_pkey on some_table  (cost=0.00..161797059.16 rows=1086279613 width=8)
                 Filter: (trans_id IS NOT NULL)

I checked there wasn't a lock of any kind:
select waiting from pg_stat_activity;
 waiting
---------
 f
 f
 f
 f
 f
 f
 f
(7 rows)

IOwait on the machine is around 20%:
sar 10 5
Linux 2.6.18-128.el5 (trans05.afs)      05/21/2010

10:56:49 AM       CPU     %user     %nice   %system   %iowait    %steal     %idle
10:56:59 AM       all      5.90      0.00      2.04     20.67      0.00     71.39
10:57:09 AM       all      5.90      0.00      1.99     23.36      0.00     68.75
10:57:19 AM       all      5.87      0.00      2.10     22.56      0.00     69.47
10:57:29 AM       all      5.84      0.00      2.09     23.56      0.00     68.51
10:57:39 AM       all      6.30      0.00      2.23     21.53      0.00     69.94
Average:          all      5.96      0.00      2.09     22.34      0.00     69.61

Any ideas why the select would be taking so long? It has gone on for
minutes with no answer. I can just look at the value of the sequence for
the primary key, but I am curious why something that usually is
sub-second is taking so long.
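Note: the waiting column in pg_stat_activity only reflects heavyweight-lock
waits. A more direct check for blockers, sketched here with the 8.4 column
names (procpid, current_query), is to look for ungranted entries in pg_locks:

-- sketch: any lock request that has not been granted, and the session asking for it
SELECT l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       a.procpid,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;

An empty result, consistent with the all-false waiting column above, means no
session is actually blocked on a lock.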
Francisco Reyes <lists@stringsutils.com> writes:
> I am trying to do
> select max(primary_key) from some_table;

Are there a whole lot of nulls in that column?

			regards, tom lane
Tom Lane writes:
> Francisco Reyes <lists@stringsutils.com> writes:
>> I am trying to do
>> select max(primary_key) from some_table;
>
> Are there a whole lot of nulls in that column?

Zero nulls. It is a primary key.
Francisco Reyes <lists@stringsutils.com> writes:
> Tom Lane writes:
>> Francisco Reyes <lists@stringsutils.com> writes:
>>> I am trying to do
>>> select max(primary_key) from some_table;
>> Are there a whole lot of nulls in that column?
> Zero nulls. It is a primary key.

Huh.  The proposed plan should have run in basically zero time then.
You might try strace'ing the backend or attaching to it with gdb to see
what it's doing.

			regards, tom lane
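To act on that suggestion, the backend's PID can be pulled from
pg_stat_activity first. A minimal sketch, assuming the 8.4 column names and
that the slow statement still looks like the one in the original post:

SELECT procpid, xact_start, current_query
FROM pg_stat_activity
WHERE current_query LIKE 'select max%';

With that PID in hand, strace -p <pid> shows whether the backend is steadily
issuing reads against the index, and attaching gdb for a backtrace shows where
it is spending its time.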
On Fri, May 21, 2010 at 1:48 PM, Francisco Reyes <lists@stringsutils.com> wrote:
> Tom Lane writes:
>> Francisco Reyes <lists@stringsutils.com> writes:
>>> I am trying to do
>>> select max(primary_key) from some_table;
>>
>> Are there a whole lot of nulls in that column?
>
> Zero nulls. It is a primary key.

Did you do a big delete recently? Any other open transactions? Since you
are I/O waiting, I'm guessing it's either hint bits or another open
transaction that created tons of row versions your query has to wade
through.

The remedy for the former is to simply eat it (a one-time penalty) or
rebuild the table. For the latter you have to resolve the other
transaction.

How big is your table according to pg_relation_size()?

merlin
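A sketch of those two checks, using the 8.4 column names and the placeholder
table name from this thread:

-- sessions with an open transaction, oldest first; the oldest xact_start is
-- the one whose row versions the backward index scan may have to wade through
SELECT procpid, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;

-- size of the table's main heap
SELECT pg_size_pretty(pg_relation_size('some_table'));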
Merlin Moncure writes:

> Did you do a big delete recently? Any other open transactions?

Some inserts were taking place, roughly 2 to 5 million rows inside
transactions. We were doing some ETL and each batch represented a file we
were loading. We need to have the entire file loaded or roll back, so each
file is done within a transaction.

> The remedy for the former is to simply eat it (a one-time penalty) or
> rebuild the table. For the latter you have to resolve the other
> transaction.

I think it is related to the inserts... after they were done, everything
was back to normal.

> How big is your table according to pg_relation_size()?

\dt+ is easier. :-)
116GB
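One caveat on sizes, sketched with the placeholder table name: if \dt+ on 8.4
is reporting pg_relation_size() (the main heap only), the total on disk once
indexes and TOAST are counted can be noticeably larger:

SELECT pg_size_pretty(pg_relation_size('some_table'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('some_table')) AS heap_indexes_and_toast;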