" -> Index Scan using assets_floatvalue_idx on assets (cost=0.43..2714072.57 rows=867607 width=53) (actual time=0.089..0.230 rows=7 loops=1)"
" Index Cond: (((_doc #> '{floatValue}'::text[]))::double precision < '3'::double precision)"
"Planning Time: 1.617 ms"
"Execution Time: 0.276 ms"
313 is a pretty high cost estimate for fetching an estimated 100 rows. It must think that nearly every row fetched from the table is going to be a random page fetch, which means it must think the correlation between assets_floatvalue_idx and the physical table order is close to 0.
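If it is useful, what the planner believes about that correlation can be checked directly (a sketch; for an expression index, pg_stats reports the statistics under the index's own name, with the expression showing up as a synthetic column, usually "expr"):

select tablename, attname, n_distinct, correlation
from pg_stats
where tablename = 'assets_floatvalue_idx';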
SET enable_seqscan = on;
explain analyze select id, _doc->>'floatValue' from assets where (_doc #> '{floatValue}'::text[])::double precision < 3.0 limit 100;
It thinks it will find 867607 rows that meet the < 3.0 condition, but it really finds only 7. It has to scan the full table, because with only 7 matching rows it can never stop early due to the LIMIT 100. Why is the estimate wrong by a factor of over 100,000? It should be using the statistics from the expression index here (even though it is not using the index during execution), and so should have pretty good statistics.
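One possibility worth ruling out (just a guess, not a diagnosis): expression statistics are only gathered by ANALYZE runs that happen after the index exists, so if the table has not been analyzed since assets_floatvalue_idx was created, the planner has no expression stats and falls back to generic selectivity defaults:

analyze assets;
explain select id from assets where (_doc #> '{floatValue}'::text[])::double precision < 3.0;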
Can you show the output of:
select * from pg_stats where tablename = 'float_number_index_path2';
For readability, use the output format that shows the columns down the screen, not across. In psql, that would be toggled on with \x.
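For example, in a psql session:

\x
select * from pg_stats where tablename = 'float_number_index_path2';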