This is great. So as a general rule of thumb, if I see a Join Filter removing an excessive number of rows, I can check whether that condition can be added to an index on the same table that is already being scanned.
But taking a step back, the core problem in your explain output seems to be here:

>> ->  Nested Loop  (cost=1.29..83263.71 rows=1 width=24) (actual time=0.196..23799.930 rows=53595 loops=1)
>>       Join Filter: (o.po_id = p.po_id)
>>       Rows Removed by Join Filter: 23006061
>>       Buffers: shared hit=23217993 dirtied=1
That's an awful lot of rows being formed by the join only to be rejected afterward. You should try creating an index on branch_purchase_order_products(po_id, product_code) so that the po_id condition can be enforced during the inner index scan instead of in the join filter.
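Concretely, that could look something like the following (the index name here is just illustrative; the table and columns are the ones from your plan):

```sql
-- Composite index so the planner can match o.po_id = p.po_id
-- inside the inner index scan rather than filtering rows after the join.
-- CONCURRENTLY avoids taking a write-blocking lock while it builds.
CREATE INDEX CONCURRENTLY branch_po_products_po_id_product_code_idx
    ON branch_purchase_order_products (po_id, product_code);
```

After the index is built, re-run the query with EXPLAIN (ANALYZE, BUFFERS) and check that "Rows Removed by Join Filter" on that Nested Loop has dropped (ideally the Join Filter line disappears entirely, replaced by an Index Cond on the inner scan).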