RE: Performance Optimisation - Identifying the correct DB - Mailing list pgsql-admin

From Eshara Mondal
Subject RE: Performance Optimisation - Identifying the correct DB
Date
Msg-id BYAPR08MB589506D03D4407D3607904D2B9230@BYAPR08MB5895.namprd08.prod.outlook.com
In response to Performance Optimisation - Identifying the correct DB  (Renjith Gk <renjithgk@gmail.com>)
List pgsql-admin

Hello,


As I understand it, you should not have any issues reading 200K records in Postgres, unless your tables are being heavily written to at the same time, over-indexed, or not indexed at all. You could also check how much your tables have ‘bloated’ to see whether or not you need to run a VACUUM on them.
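If the reads themselves are slow, a quick first check is to look at the query plan and confirm an index is actually being used. A minimal sketch (the table and filter here are placeholders, not from your setup):

-- 'orders' and 'created_at' are placeholder names; substitute your own table and filter
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
WHERE created_at >= now() - interval '1 day';

A sequential scan over a large table in that plan usually points at a missing index or at table bloat, which the pg_stat_all_tables query below will show.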


SELECT schemaname, relname, n_live_tup, n_dead_tup,
       trunc(100*n_dead_tup/(n_live_tup+1))::float AS "ratio%",
       to_char(last_autovacuum, 'YYYY-MM-DD HH24:MI:SS') AS autovacuum_date,
       to_char(last_autoanalyze, 'YYYY-MM-DD HH24:MI:SS') AS autoanalyze_date
FROM pg_stat_all_tables
ORDER BY last_autovacuum;


This can help by showing you how many dead tuples are lying around in your tables. It also shows you the last time each table was analyzed and vacuumed by the autovacuum daemon.
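If the dead tuple ratio looks high for a particular table and autovacuum has not visited it recently, a manual run is one option (again, the table name is a placeholder):

-- 'my_schema.my_table' is a placeholder; VERBOSE just reports what was cleaned up
VACUUM (VERBOSE, ANALYZE) my_schema.my_table;

If a table keeps bloating between autovacuum runs, lowering autovacuum_vacuum_scale_factor for that table is also worth a look.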


Hope that helps a bit,

Eshara



From: Renjith Gk <renjithgk@gmail.com>
Sent: Tuesday, April 23, 2019 9:07 AM
To: pgsql-admin@lists.postgresql.org
Subject: Performance Optimisation - Identifying the correct DB


Hello Friends,


I am seeking suggestions for identifying the correct DB for READ operations (fetching queries).


What is the optimal execution time for reading 200K records in Postgres? We had issues reading records in Cassandra, which timed out at ~200K records.


Any ideal solution or recommendation regarding Postgres would be welcome.


Best regards,

Renjith
