Hello all
I have a large amount of phone call data (280M records, 25 GB). The best solution
for this task is partitioning, and I use it now. But at first I tried to put all
the data into a single table indexed by call date and time. Because of the nature
of the data, the records are clustered by date and nearly ordered by time.
The table definition:
CREATE DOMAIN datetime AS timestamp NOT NULL;
CREATE DOMAIN cause AS int2 DEFAULT 16 NOT NULL;
CREATE DOMAIN conn_num AS varchar(34);
CREATE DOMAIN dur AS int4 NOT NULL;
CREATE DOMAIN lno AS int2;
CREATE DOMAIN tgrp AS char(6);
CREATE TABLE conn
(
  datetime datetime,
  anum     conn_num,
  bnum     conn_num,
  dur      dur,
  itgrp    tgrp,
  ilno     lno,
  otgrp    tgrp,
  olno     lno,
  cause    cause
)
WITHOUT OIDS;
CREATE INDEX conn_dt
ON conn
USING btree
(datetime);
The usual tasks on this table are exporting and searching calls for one or more
days. Such a query scans 400K or more records, selected through the 'conn_dt'
index. The best access path for it is a bitmap heap scan: my tests showed bitmap
scan performance almost equal to a seq scan, but PG always preferred a plain index
scan, which is 20 times slower.
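For example, the two plans can be compared on a single day along these lines (the
dates below are only placeholders; enable_indexscan is the standard planner switch,
and with it off PG falls back to the bitmap or seq plan):

-- plan chosen by default: plain index scan on conn_dt
EXPLAIN ANALYZE
SELECT * FROM conn
WHERE datetime >= '2006-03-01' AND datetime < '2006-03-02';

-- force the bitmap heap scan for comparison
SET enable_indexscan = off;
EXPLAIN ANALYZE
SELECT * FROM conn
WHERE datetime >= '2006-03-01' AND datetime < '2006-03-02';
RESET enable_indexscan;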
Digging into the PG internals brought me to indexCorrelation. For the 'datetime'
column it was about 0.999999. So why, despite this, was the index scan so slow?
In the next step I ran
select ctid from conn where ... order by datetime;
The result showed that there was no sequential page access at all - true random
access only.
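For reference, the indexCorrelation value above comes from the column statistics
and can be checked directly:

SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'conn' AND attname = 'datetime';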
A simple model explains the situation: the sequence of numbers 2, 1, 4, 3, 6, 5,
..., 100, 99 has a correlation of about 0.9994 with its position. Now let's imagine
it is the page order of an index scan: every second fetch goes backwards. Hmm, a
bad choice, isn't it?
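That figure is easy to check (generate_series pairs each position with the swapped
value; corr() is the Pearson correlation aggregate, where available):

SELECT corr(pos, val)
FROM (SELECT g AS pos,
             g + CASE WHEN g % 2 = 1 THEN 1 ELSE -1 END AS val
      FROM generate_series(1, 100) AS g) AS swapped;
-- returns about 0.9994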
I think indexCorrelation can help to estimate the page count, but not the page
fetch cost. Why not use the formula
min_IO_cost = ceil(indexSelectivity * T) * random_page_cost
instead of
min_IO_cost = ceil(indexSelectivity * T) ?
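To put rough numbers from this table on it (25 GB is about 3.2M pages of 8 KB, and
400K of 280M rows gives indexSelectivity around 0.0014, so these are only ballpark
figures), the current formula gives

min_IO_cost = ceil(0.0014 * 3200000) = 4480

while with the default random_page_cost = 4 the proposed one gives

min_IO_cost = ceil(0.0014 * 3200000) * 4 = 17920

Since indexCorrelation is nearly 1, the planner's estimate collapses to
min_IO_cost, so today it prices those 4480 page fetches as if they were
sequential, although on this data they are effectively random.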