Joe Conway <mail@joeconway.com> writes:
> Tom Lane wrote:
>> Ah-hah, I've sussed it ... you didn't actually change the storage
>> representation. You wrote:
> Yeah, I came to the same conclusion this morning
> (update longdna set dna = dna || '';), but it still seems that the
> chunked table is very slightly faster than the substring on the
> externally stored column:
> dna=# explain analyze select pdna from dna where foffset > 6000000 and foffset < 6024000;
>                                                      QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------
>  Index Scan using foffset_idx on dna  (cost=0.00..4.22 rows=14 width=32) (actual time=0.07..0.16 rows=11 loops=1)
>    Index Cond: ((foffset > 6000000) AND (foffset < 6024000))
>  Total runtime: 0.25 msec
> (3 rows)
>
> dna=# explain analyze select substr(dna,6002000,20000) from longdna;
>                                            QUERY PLAN
> ------------------------------------------------------------------------------------------------
>  Seq Scan on longdna  (cost=0.00..1.01 rows=1 width=32) (actual time=0.23..0.24 rows=1 loops=1)
>  Total runtime: 0.29 msec
> (2 rows)
This isn't a totally fair comparison, though, since the second case is
actually doing the work of assembling the chunks into a single string,
while the first is not. Data-copying alone would probably account for
the difference.
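Just to illustrate: for an apples-to-apples test, the chunked-table query
would have to hand back one concatenated string as well.  Something along
these lines would do it (untested sketch; the pdna/foffset column names are
taken from your plan, and relying on the subquery's ORDER BY to control the
aggregate's input order isn't formally guaranteed, though it works in
practice):

    -- one-off aggregate that glues text values together
    create aggregate concat_text (
        basetype = text,        -- input type
        sfunc    = textcat,     -- built-in support function behind text || text
        stype    = text,        -- running state is just the string so far
        initcond = ''           -- start from the empty string
    );

    explain analyze
    select concat_text(pdna)
      from (select pdna from dna
             where foffset > 6000000 and foffset < 6024000
             order by foffset) ss;

With that, both queries fetch the chunks *and* reassemble them, so whatever
difference remains should be mostly per-tuple overhead.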
I would expect that the two would come out to essentially the same cost
when fairly compared, since the dna table is nothing more nor less than
a hand implementation of the TOAST concept. The toaster's internal
fetching of toasted data segments ought to be equivalent to the above
indexscan. The toaster would have a considerable edge on Scott's
implementation when it came to assembling the chunks, since it's working
in C and not in plpgsql, but the table access costs ought to be just
about the same.
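For the archives, since this already tripped things up once upthread: substr()
can only fetch just the chunks it needs when the value is stored out-of-line
and uncompressed, and changing the storage setting doesn't rewrite rows that
are already in the table.  Roughly (sketch, using Joe's table and column
names):

    -- ask for uncompressed out-of-line (TOAST) storage for this column
    alter table longdna alter column dna set storage external;

    -- force the existing value to be re-toasted under the new setting;
    -- the no-op concatenation is enough to make the row get rewritten
    update longdna set dna = dna || '';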
regards, tom lane