On 7/01/2010 11:45 PM, Gurgel, Flavio wrote:
>>> The table is very wide, which is probably why the tested databases
>>> can deal with it faster than PG. You could try and narrow the table
>>> down (for instance: remove the Div* fields) to make the data more
>>> "relational-like". In real life, speedups in these circumstances
>>> would probably be gained by normalizing the data to make the basic
>>> table smaller and easier to use with indexing.
>
> Ugh. I don't think so. That's why indexes were invented. PostgreSQL is
> smart enough to "jump" over columns using byte offsets.
Even if Pg tried to do so, it would generally not help. The cost of a
disk seek to the start of the next row would be much greater than the
cost of continuing to sequentially read until that point was reached.
With the amazing sequential read speeds and still mediocre seek speeds
of modern disks, it's rarely worth seeking over unwanted data that's
less than a megabyte or two in size.
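To put rough numbers on that, here's a back-of-envelope sketch of my own
(the 8 ms average seek and 120 MB/s sequential throughput are assumed
figures for a typical single spinning disk, not measurements):

# Compare the cost of seeking past unwanted data vs simply reading
# through it, using assumed drive figures.
SEEK_MS = 8.0                 # assumed average seek time
SEQ_MB_PER_S = 120.0          # assumed sequential read throughput

def read_through_ms(skip_bytes):
    """Time to just keep reading sequentially through the unwanted bytes."""
    return skip_bytes / (SEQ_MB_PER_S * 1024 * 1024) * 1000.0

for kb in (8, 64, 512, 1024, 2048):
    skip = kb * 1024
    print(f"skip {kb:>5} kB: read-through {read_through_ms(skip):6.2f} ms "
          f"vs one seek ~{SEEK_MS:.1f} ms")

# With these numbers the break-even gap is about 0.008 s * 120 MB/s ~ 1 MB:
# seeking only starts to pay off once the skipped data approaches a
# megabyte or so, which is the point being made above.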
Anyway, in practice the OS-level, array-level and/or disk-level
readahead would generally ensure that the data you were trying to skip
had already been read or was in the process of being read.
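As an aside, here's a minimal Linux-only sketch of my own (an
illustration, not taken from PostgreSQL's source) of how a program can
encourage that readahead explicitly; any bytes you then skip over are
likely to be sitting in the page cache anyway:

import os, tempfile

# Create a scratch file so the sketch is self-contained.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (4 * 1024 * 1024))   # 4 MB of dummy data
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    # Hint that we will read the whole file sequentially, so the kernel
    # readahead window grows aggressively (Linux-specific behaviour).
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    while os.read(fd, 8192):   # read in 8 kB chunks until EOF
        pass
finally:
    os.close(fd)
    os.unlink(path)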
Can Pg even read partial records? I thought it all operated on a page
level, where if an index indicates that a particular value is present on
a page, the whole page gets read in and all records on the page are
checked for the value of interest. No?
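For what it's worth, the on-disk unit PostgreSQL reads is the whole
block (8 kB by default). Here's a small sketch of my own that decodes a
page header straight from a heap segment file; the path is hypothetical
(find a real one with SELECT pg_relation_filepath('tbl')) and it assumes
a little-endian build with the default BLCKSZ:

import struct

BLCKSZ = 8192
path = "/var/lib/postgresql/data/base/16384/16385"   # hypothetical segment file
block_no = 0

with open(path, "rb") as f:
    f.seek(block_no * BLCKSZ)
    page = f.read(BLCKSZ)      # the whole page comes in, wanted rows or not

# PageHeaderData layout: pd_lsn(8) pd_checksum(2) pd_flags(2) pd_lower(2)
# pd_upper(2) pd_special(2) pd_pagesize_version(2) pd_prune_xid(4) = 24 bytes
pd_lower, pd_upper = struct.unpack_from("<HH", page, 12)
n_item_pointers = (pd_lower - 24) // 4    # 4-byte ItemIdData entries
print(f"block {block_no}: {n_item_pointers} item pointers, "
      f"free space {pd_upper - pd_lower} bytes")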
--
Craig Ringer