Re: Adding skip scan (including MDAM style range skip scan) to nbtree - Mailing list pgsql-hackers

From: Alena Rybakina
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Msg-id: 32ad7d48-cecf-48b5-9f9b-49380377ffd2@postgrespro.ru
In response to: Re: Adding skip scan (including MDAM style range skip scan) to nbtree (Peter Geoghegan <pg@bowt.ie>)
List: pgsql-hackers
Hi!
On Sat, Mar 22, 2025 at 1:47 PM Peter Geoghegan <pg@bowt.ie> wrote:
> Attached is v30, which fully replaces the pstate.prechecked optimization
> with the new _bt_skip_ikeyprefix optimization (which now appears in
> v30-0002-Lower-nbtree-skip-array-maintenance-overhead.patch, and not in
> 0003-*, due to my committing the primscan scheduling patch just now).
>
> Attached is v31, which has a much-improved _bt_skip_ikeyprefix (which
> I've once again renamed, this time to _bt_set_startikey).
I reviewed your first patch and noticed that it adds the ability to generate quals for the leading index column even when that column isn't used in the query.
I tried an example like this:
CREATE TABLE sales (
    id SERIAL PRIMARY KEY,
    region TEXT,
    product TEXT,
    year INT
);

INSERT INTO sales (region, product, year)
SELECT CASE WHEN i % 4 = 0 THEN 'North'   -- i % 4 maps each row to one
            WHEN i % 4 = 1 THEN 'South'   -- of four distinct regions
            WHEN i % 4 = 2 THEN 'East'
            ELSE 'West'
       END,
       CASE WHEN random() > 0.5 THEN 'Laptop' ELSE 'Phone' END,
       2023
FROM generate_series(1, 100000) AS i;

VACUUM ANALYZE;

CREATE INDEX sales_idx ON sales (region, product, year);

EXPLAIN ANALYZE SELECT * FROM sales WHERE product = 'Laptop' AND year = 2023;
On master, the query plan uses a Seq Scan:
                                                   QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Seq Scan on sales  (cost=0.00..2137.00 rows=49703 width=19) (actual time=0.035..31.438 rows=50212.00 loops=1)
   Filter: ((product = 'Laptop'::text) AND (year = 2023))
   Rows Removed by Filter: 49788
   Buffers: shared hit=637
 Planning:
   Buffers: shared hit=35
 Planning Time: 0.695 ms
 Execution Time: 33.942 ms
(8 rows)
Your patch assigns fair costs to this case, which allows the planner to consider the index scan path, and in my opinion that is perfect!)
                                                            QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on sales  (cost=652.46..2031.86 rows=49493 width=19) (actual time=18.039..33.723 rows=49767.00 loops=1)
   Recheck Cond: ((product = 'Laptop'::text) AND (year = 2023))
   Heap Blocks: exact=637
   Buffers: shared hit=642 read=50
   ->  Bitmap Index Scan on sales_idx  (cost=0.00..640.09 rows=49493 width=0) (actual time=17.756..17.756 rows=49767.00 loops=1)
         Index Cond: ((product = 'Laptop'::text) AND (year = 2023))
         Index Searches: 4
         Buffers: shared hit=5 read=50
 Planning:
   Buffers: shared hit=55 read=1
 Planning Time: 0.984 ms
 Execution Time: 36.940 ms
I think it would be useful for EXPLAIN to show that the index scan skipped the "region" column, and I assume we should also report how many distinct values that column had.
For example, it could look like this: "Skip Scan on region (4 distinct values)":
                                                            QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on sales  (cost=652.46..2031.86 rows=49493 width=19) (actual time=18.039..33.723 rows=49767.00 loops=1)
   Recheck Cond: ((product = 'Laptop'::text) AND (year = 2023))
   Heap Blocks: exact=637
   Buffers: shared hit=642 read=50
   ->  Bitmap Index Scan on sales_idx  (cost=0.00..640.09 rows=49493 width=0) (actual time=17.756..17.756 rows=49767.00 loops=1)
         Index Cond: ((product = 'Laptop'::text) AND (year = 2023))
         Skip Scan on region (4 distinct values)
         Index Searches: 4
         Buffers: shared hit=5 read=50
 Planning:
   Buffers: shared hit=55 read=1
 Planning Time: 0.984 ms
 Execution Time: 36.940 ms
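For what it's worth, the number I'm suggesting we display lines up with the "Index Searches: 4" line already shown: one index descent per distinct value of the skipped leading column. A quick sanity check against the sales table created above:

-- Each distinct value of the skipped column costs one index search,
-- which is where "Index Searches: 4" comes from in this plan:
SELECT count(DISTINCT region) FROM sales;
-- Returns 4 ('North', 'South', 'East', 'West') with the data above.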
What do you think?
I didn't see any regression tests. Maybe we should add some? To be honest, I didn't see them mentioned in the commit message either, but I might have missed something.
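Just to illustrate, here is a minimal sketch of what such a test could look like (the table name, data, and expected plan shape are my own assumptions, not taken from the patch):

-- Hypothetical regression test sketch: verify that a qual on the
-- non-leading index columns can still produce an index scan.
CREATE TABLE skip_scan_test (region text, product text, year int);
INSERT INTO skip_scan_test
SELECT (ARRAY['North','South','East','West'])[1 + i % 4],
       (ARRAY['Laptop','Phone'])[1 + i % 2],
       2023
FROM generate_series(1, 1000) AS i;
CREATE INDEX skip_scan_test_idx ON skip_scan_test (region, product, year);
VACUUM ANALYZE skip_scan_test;

SET enable_seqscan = off;  -- keep the expected plan output deterministic
EXPLAIN (COSTS OFF)
SELECT * FROM skip_scan_test WHERE product = 'Laptop' AND year = 2023;
RESET enable_seqscan;

DROP TABLE skip_scan_test;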
--
Regards,
Alena Rybakina
Postgres Professional