Hello,
I have a table (transactions) containing 61 414 503 rows. The basic
count query (select count(transid) from transactions) takes 138226
milliseconds (about 138 seconds).
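The plan below was obtained by running EXPLAIN ANALYZE on that same
statement:

  explain analyze select count(transid) from transactions;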
This is the output:
Aggregate  (cost=2523970.79..2523970.80 rows=1 width=8) (actual time=268964.088..268964.090 rows=1 loops=1)
  ->  Seq Scan on transactions  (cost=0.00..2370433.43 rows=61414943 width=8) (actual time=13.886..151776.860 rows=61414503 loops=1)
Total runtime: 268973.248 ms
The table has several indexes defined, including one on the transid column:
index-name                 column-name  non-unique  cardinality  pages
transactions_id_key        transid      f           61414488     168877
trans_ip_address_index     ip_address   t           61414488     168598
trans_member_id_index      member_id    t           61414488     169058
trans_payment_id_index     payment_id   t           61414488     168998
trans_status_index         status       t           61414488     169005
transactions__time_idx     time         t           61414488     168877
transactions_offer_id_idx  offer_id     t           61414488     169017

(The remaining metadata columns, index-qualifier, type, ordinal-position,
asc-or-desc and filter-condition, are identical for every index:
<null>, 3, 1, <null>, <null>.)
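If it helps, I can also pull the full index definitions from pg_indexes
with something like:

  select indexname, indexdef
  from pg_indexes
  where tablename = 'transactions';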
I'm not a DBA, so I'm not sure whether this execution time is acceptable;
it just seems a bit long to me. I'd appreciate it if someone could share
their thoughts on this. Is there a way to make this table/query perform
better?
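One thing I did come across while searching is reading the planner's
estimate from pg_class instead of doing an exact count, e.g.:

  select reltuples::bigint as estimated_rows
  from pg_class
  where relname = 'transactions';

Would that be accurate enough to rely on, or is it only a rough guide?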
Any query I run that joins with the transactions table takes forever
to complete, but maybe that's normal for a table this size.
Regards,
Marcin