What you're seeing there, I think, is the performance hit of random disk reads on a database that isn't cached.
Try increasing shared_buffers to a value large enough for the table and index to fit, then re-run the queries a few times until you see something like Buffers: shared hit=<big value> read=0, and compare the timings; caching can change them quite a lot.
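A rough sketch of that workflow (the table name myevents is taken from your question; the size value is just an example):

```sql
-- See how much space the table plus its indexes actually need:
SELECT pg_size_pretty(pg_total_relation_size('myevents'));

-- Then, in postgresql.conf, set shared_buffers comfortably above that
-- (changing it requires a server restart), e.g.:
--   shared_buffers = 2GB

-- Re-run the query a few times and look for "shared hit=... read=0":
EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;  -- your original query here
```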
Also, I recommend setting track_io_timing = on in postgresql.conf, and then using explain (analyze, buffers, timing) to check how much time the database spends doing I/O.
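For a quick test you don't even need to edit postgresql.conf; a superuser can enable it per session (again using the myevents table from your question as a stand-in):

```sql
-- Superuser-only when set per session; otherwise set it in postgresql.conf:
SET track_io_timing = on;

-- The plan output now includes "I/O Timings: read=..." lines
-- showing time spent waiting on disk:
EXPLAIN (ANALYZE, BUFFERS, TIMING) SELECT count(*) FROM myevents;
```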
Also, try running vacuum analyze myevents; before testing, because it looks like the visibility map for that table is not up to date.
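After the vacuum, you can verify the visibility map coverage from pg_class (relallvisible close to relpages means most pages are all-visible, which helps index-only scans):

```sql
VACUUM ANALYZE myevents;

-- Compare all-visible pages to total pages for the table:
SELECT relallvisible, relpages
FROM pg_class
WHERE relname = 'myevents';
```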
However, even in the fully cached case, selecting ~40% of a table's rows will almost always be faster via a sequential scan, so I wouldn't expect miracles.
"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage."