Index Scan using idx_user_country on public.old_card  (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)
  Output: id, user_id, user_country, user_channel, user_role, created_by_system_key, created_by_username, created_at, last_modified_at, date_start, date_end, payload, tags, menu, deleted, campaign, correlation_id
  Index Cond: (((old_card.user_id)::text = '1234'::text) AND (old_card.user_country = 'BR'::bpchar))
  Buffers: shared hit=11 read=138 written=35
Planning time: 7.748 ms
Execution time: 76.755 ms
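
For context, a query of roughly this shape would produce the plan above. This is a reconstruction from the Index Cond, not the exact statement we ran; the literal values are just the examples shown in the plan:

-- Reconstructed from the plan above (table/column names come from the plan;
-- the literals are the example values from the Index Cond).
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM public.old_card
WHERE user_id = '1234'
  AND user_country = 'BR';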
77 ms on an instance with 8GB of RAM, querying a table with 167 million rows in a database of almost 500GB, is amazing!
Now we are investigating other bottlenecks: is it the creation of a new connection to PG on every invocation (we have no connection pooler, like PgBouncer, at the moment)? Is it the Lambda startup time? Is it the network latency between PG and Lambda?
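
A rough way to separate the network/connection overhead from the query cost is a psql session run from the same VPC/subnet as the Lambda (a sketch, reusing the reconstructed query above; connection creation itself can be approximated by timing a fresh `psql -c "SELECT 1"` from the shell and subtracting the statement time psql reports):

-- Compare a near-empty round trip with the real query (psql, \timing on):
\timing on
SELECT 1;                -- measures little more than one network round trip
SELECT *                 -- round trip plus the ~77 ms of index scan work
FROM public.old_card
WHERE user_id = '1234'
  AND user_country = 'BR';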
Sorry for taking up your time, guys; it did help us find the problem, even though it wasn't a PG problem.
BTW, what performance! I am impressed.
Thanks PG community!