Ah, ok, so it has to decode all the fields until the one needed is reached, because only by decoding them can it find out
the length of the individual fields.
Sorry for the noise. But please have a look at my previous report, which is the really interesting one.
-----Original Message-----
From: Tom Lane <tgl@sss.pgh.pa.us>
Sent: Sunday, November 19, 2023 17:59
To: Daniel Migowski <dmigowski@ikoffice.de>
Cc: pgsql-bugs@lists.postgresql.org
Subject: Re: BUG #18206: Strange performance behaviour depending on location of field in query.
PG Bug reporting form <noreply@postgresql.org> writes:
> I have a table with some columns id and a to z and these two queries
> have drastically different runtimes.
> select count(id) FROM testtable t WHERE t.z IS NULL; (35% slower
> than WHERE t.a IS NULL)
> select count(id) FROM testtable t WHERE t.a IS NULL;
> This just screams like somewhere PostgreSQL is iterating over fields
> over and over again without assigning field indexes to the literals.
This is entirely unsurprising. There's no cheap way to extract values from a row that contains nulls: the offset of
the field you want can't be determined without iterating over all the fields before it, since some of them might not be
there.
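
For illustration, here is a minimal C sketch of that per-row walk. The struct and helper names are invented for the
example; this is not the actual heap_deform_tuple code, just the shape of the cost it incurs:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct
    {
        const unsigned char *data;      /* packed values of the non-null fields */
        const unsigned char *nullbits;  /* one bit per column, 1 = value present */
        const int           *attlen;    /* byte width of each column's datum */
    } RowSketch;

    static bool col_present(const RowSketch *row, int col)
    {
        return (row->nullbits[col / 8] >> (col % 8)) & 1;
    }

    /* Find column 'target' in one row.  Because null columns occupy no space,
     * the byte offset of 'target' depends on which earlier columns happen to
     * be null in this particular row, so it cannot be cached: every preceding
     * column has to be walked.  That is why filtering on t.z (the last column)
     * costs more per row than filtering on t.a (the first). */
    static const unsigned char *get_column(const RowSketch *row, int target)
    {
        size_t off = 0;
        for (int col = 0; col < target; col++)
            if (col_present(row, col))
                off += row->attlen[col];
        return col_present(row, target) ? row->data + off : NULL;
    }
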
One could imagine ways to optimize queries of this exact form: if the WHERE clause is "WHERE t.z IS [NOT] NULL" and
nothing else, in principle it could be checked by examining z's bit in the nulls bitmap, without really extracting any
field values. But that'd require adding some remarkably ugly warts to the clause evaluation mechanism, and I doubt it
would be worth the trouble.
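
Again just a sketch of what such a check might amount to; PostgreSQL does not do this today, and the helper name is
invented:

    #include <stdbool.h>

    /* Hypothetical shortcut: answer "t.z IS NULL" straight from the row's
     * null bitmap, with no field extraction at all.  'nullbits' has one bit
     * per column, 1 meaning a value is present. */
    static bool column_is_null(const unsigned char *nullbits, int col)
    {
        return ((nullbits[col / 8] >> (col % 8)) & 1) == 0;
    }
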
regards, tom lane