Hi everyone,
I have a SQL query that completes in ~12 s on its first invocation but then
drops to ~3 s on the following runs. The 3 seconds would be acceptable, but
how can I make sure the data stays cached at all times? Is it simply
enough to set shared_buffers high enough to hold the entire database (and
to have enough RAM installed, of course)? The OS is Linux in this case.
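For reference, this is the sort of postgresql.conf change I have in mind. The values below are placeholders for a box with a few GB of RAM, not recommendations (recent versions accept memory units like "1GB" directly; older versions take shared_buffers as a count of 8 kB pages):

```
# postgresql.conf sketch -- placeholder values, adjust to your machine.
# shared_buffers is PostgreSQL's own cache; RAM beyond it still helps,
# because Linux caches the underlying table files itself.
shared_buffers = 1GB            # assumed: the whole database fits under this

# effective_cache_size only tells the planner how much combined cache
# (shared_buffers + OS page cache) to expect; it allocates nothing.
effective_cache_size = 3GB
```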
First run (cold cache):

Nested Loop  (cost=0.00..11.44 rows=1 width=362) (actual time=247.83..12643.96 rows=14700 loops=1)
  ->  Index Scan using suchec_testa on suchec  (cost=0.00..6.02 rows=1 width=23) (actual time=69.91..902.68 rows=42223 loops=1)
  ->  Index Scan using idx_dokument on dokument d  (cost=0.00..5.41 rows=1 width=339) (actual time=0.26..0.26 rows=0 loops=42223)
Total runtime: 12662.64 msec
Second run (warm cache):

Nested Loop  (cost=0.00..11.44 rows=1 width=362) (actual time=1.18..2829.79 rows=14700 loops=1)
  ->  Index Scan using suchec_testa on suchec  (cost=0.00..6.02 rows=1 width=23) (actual time=0.51..661.75 rows=42223 loops=1)
  ->  Index Scan using idx_dokument on dokument d  (cost=0.00..5.41 rows=1 width=339) (actual time=0.04..0.04 rows=0 loops=42223)
Total runtime: 2846.63 msec
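One crude workaround I've seen suggested is to read the table's files once after boot, so the Linux page cache already holds them before the first query. A sketch; the PGDATA default and the database OID (16384) are assumptions, so substitute the paths from your own installation:

```shell
#!/bin/sh
# Sketch: warm the Linux page cache by reading a database's files once.
# Paths below are assumptions -- adjust PGDATA and the database OID.

warm_cache() {
    # Read every file under the given directory and discard the bytes;
    # Linux keeps the pages in its cache afterwards.
    find "$1" -type f -exec cat {} + > /dev/null 2>&1
}

DIR="${PGDATA:-/var/lib/pgsql/data}/base/16384"
if [ -d "$DIR" ]; then
    warm_cache "$DIR"
fi
```

This only warms the OS cache, not shared_buffers, but the second plan above suggests most of the 12 s vs. 3 s gap is disk reads that the OS cache can absorb.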