Hi all,
I am investigating a performance issue involving LIKE 'xxxx%' on an indexed column in a complex query with joins.
The problem boils down to this simple scenario:
====Scenario====
My database locale is C, with UTF-8 encoding. I tested this on PostgreSQL 9.1.6 and 9.2.1.
Q1.
SELECT * FROM shipments WHERE shipment_id LIKE '12345678%';
Q2.
SELECT * FROM shipments WHERE shipment_id >= '12345678' AND shipment_id < '12345679';
shipments is a table with a million rows and 20 columns. shipment_id is the primary key, a non-null text column.
CREATE TABLE cod.shipments
(
shipment_id text NOT NULL,
-- other columns omitted
CONSTRAINT shipments_pkey PRIMARY KEY (shipment_id)
);
EXPLAIN ANALYZE for Q1 gives this:
Index Scan using shipments_pkey on shipments (cost=0.00..39.84 rows=1450 width=294) (actual time=0.018..0.018 rows=1 loops=1)
Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id < '12345679'::text))
Filter: (shipment_id ~~ '12345678%'::text)
Buffers: shared hit=4
EXPLAIN ANALYZE for Q2 gives this:
Index Scan using shipments_pkey on shipments (cost=0.00..39.83 rows=1 width=294) (actual time=0.027..0.027 rows=1 loops=1)
Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id < '12345679'::text))
Buffers: shared hit=4
====Problem Description====
For Q1, the planner estimated 1450 rows, while Q2 gave a much better estimate of 1 row.
The problem is that when I combine such a condition with a join to another table, Postgres prefers a merge join (or hash join) rather than a nested loop, presumably because of the inflated row estimate.
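To make the join behavior concrete, here is a hedged sketch; the orders table and its shipment_id column are hypothetical, invented only to illustrate the shape of the query, not part of my actual schema:

```sql
-- Hypothetical join; "orders" is an invented table for illustration.
-- With the LIKE predicate, the ~1450-row estimate on shipments can push
-- the planner toward a hash or merge join over the whole orders table.
EXPLAIN ANALYZE
SELECT s.shipment_id, o.order_id
FROM cod.shipments s
JOIN orders o ON o.shipment_id = s.shipment_id
WHERE s.shipment_id LIKE '12345678%';

-- Rewriting the predicate as an explicit range brings the estimate down
-- to 1 row, and a nested loop with an index scan is chosen instead.
EXPLAIN ANALYZE
SELECT s.shipment_id, o.order_id
FROM cod.shipments s
JOIN orders o ON o.shipment_id = s.shipment_id
WHERE s.shipment_id >= '12345678' AND s.shipment_id < '12345679';
```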
====Question====
Are Q1 and Q2 equivalent? From what I can see, the results appear to be the same, or did I miss something? (Locale: C, Encoding: UTF-8.) If they are equivalent, is this a planner bug?
Many Thanks,
Sam
(The email didn't seem to go through without a subscription; resending.)