Re: need suggestion on querying big tables - Mailing list pgsql-admin

From Ron
Subject Re: need suggestion on querying big tables
Date
Msg-id 8c748c20-84bd-ed7b-c10f-35d640c6994b@gmail.com
In response to Re: need suggestion on querying big tables  (Holger Jakobs <holger@jakobs.com>)
Responses Re: need suggestion on querying big tables  (Mladen Gogala <gogala.mladen@gmail.com>)
List pgsql-admin
On 12/5/22 01:35, Holger Jakobs wrote:
Am 05.12.22 um 07:29 schrieb Ebin Jozer:
Hi Team,
In PostgreSQL 11 we have two tables, 435 GB and 347 GB in size.
If we query a single table, or do an inner join across both big tables, it does not display any output; it just keeps running.

We can see the wait event is IO / DataFileRead.

Server spec: 8 cores and 64 GB RAM
PG config: effective_cache_size = 53 GB, shared_buffers = 12 GB

Can you please suggest some ideas on how we can query these big tables and get the output faster?

Thanks & Regards,
Ebin

There are basically two "secrets" to performance in this case:

  1. make sure that the join operation uses indexes
  2. make sure that you have enough work_mem available
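
For point 1, a minimal sketch of checking index usage; the table and column names (big_a, big_b, customer_id) are illustrative assumptions, not from this thread:

```sql
-- Index the join keys on both sides so the planner can choose an
-- index-based join instead of sequentially scanning 400+ GB.
-- CONCURRENTLY avoids blocking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_big_a_customer_id ON big_a (customer_id);
CREATE INDEX CONCURRENTLY idx_big_b_customer_id ON big_b (customer_id);

-- Verify the plan actually uses the indexes (and see I/O counts)
-- before waiting on the full query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT a.*, b.*
FROM big_a a
JOIN big_b b ON b.customer_id = a.customer_id
WHERE a.customer_id = 12345;
```

Note that an unfiltered join of two whole tables this size will scan both regardless; indexes help most when a WHERE clause or join condition narrows the rows.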

work_mem can be set in postgresql.conf, but also individually in every session. So don't set the global value too high, because that amount can be allocated separately for every sort and hash operation in a query, across all concurrent sessions.
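
A session-scoped sketch of that advice (the value is illustrative):

```sql
-- Raise work_mem only for this session's big join, instead of
-- raising the global default in postgresql.conf.
SET work_mem = '256MB';
-- ... run the large join here ...

-- Or scope it even tighter, to a single transaction; SET LOCAL
-- reverts automatically at COMMIT/ROLLBACK.
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... query ...
COMMIT;
```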


A "just smart enough" user could perform an inadvertent Denial of Service attack on the database by cranking his session work_mem really high, no?
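
Indeed, a plain SET is enough to demonstrate it. Note that ALTER ROLE only establishes a per-user default, not a hard cap, so it does not close the hole (the role name and values below are illustrative):

```sql
-- Any user can do this in their own session:
SET work_mem = '32GB';  -- each sort/hash node may now try to use up to 32GB

-- A DBA can set a saner per-role default...
ALTER ROLE app_user SET work_mem = '64MB';
-- ...but the user can still override it with SET in a session,
-- so this is a default, not an enforced limit.
```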

--
Angular momentum makes the world go 'round.
