>>> My list can contain 1..100,000 records and the table contains
>>> 3,000,000 records and is growing.
>>
>> Ah. No IN(), then ;)
>> Temp table + ANALYZE seems your only option...
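A minimal sketch of that temp table + ANALYZE approach, for the record;
bigtable and the id values here are hypothetical:

CREATE TEMP TABLE wanted_ids (id integer);
INSERT INTO wanted_ids VALUES (1),(2),(3);  -- ... your actual list of ids
ANALYZE wanted_ids;  -- give the planner accurate statistics on the list
SELECT t.* FROM bigtable t JOIN wanted_ids w ON (t.id = w.id);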
>
> In 8.3 or 8.4, I think that IN() and a temp table produce exactly the
> same result.
>
> Andrus.
Oh, I just thought about something; I don't remember in which version it
was added, but:

EXPLAIN ANALYZE SELECT sum(column1)
FROM (VALUES ...a million integers...) AS v;

Postgres is perfectly happy with that; it's either a bit slow (about 1
second) or very fast, depending on how you look at it...
 Aggregate  (cost=15000.00..15000.01 rows=1 width=4) (actual time=1060.253..1060.253 rows=1 loops=1)
   ->  Values Scan on "*VALUES*"  (cost=0.00..12500.00 rows=1000000 width=4) (actual time=0.009..634.728 rows=1000000 loops=1)
 Total runtime: 1091.420 ms
The most interesting thing, of course, is that the statistics are exact:
the planner's rows=1000000 estimate matches the actual row count. You can
use VALUES like a table (join against it, whatever). Of course, it's
always slightly annoying to juggle result sets and stuff them into
comma-separated strings, but it works.
Here it knows there are only a few rows ===> nested loop:
EXPLAIN SELECT a.* FROM annonces a
  JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7)) AS v
  ON (a.id = v.column1);
                                        QUERY PLAN
----------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..66.73 rows=8 width=943)
   ->  Values Scan on "*VALUES*"  (cost=0.00..0.10 rows=8 width=4)
   ->  Index Scan using annonces_pkey on annonces a  (cost=0.00..8.32 rows=1 width=943)
         Index Cond: (a.id = "*VALUES*".column1)
With a million values in the list it switches to a hash join, of course, etc.
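For illustration, the large-list case has the same shape (values elided;
the exact plan depends on your data and settings):

EXPLAIN SELECT a.* FROM annonces a
  JOIN (VALUES (0),(1),(2) /* ... a million rows ... */) AS v
  ON (a.id = v.column1);
-- with that many rows in the VALUES list, the planner typically
-- abandons the nested loop in favor of a hash join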