Thread: Re: 50K record DELETE Begins, 100% CPU, Never Completes 1 hour later
By 32K I meant:

    sort_mem = 32768          # min 64, size in KB

Do you mean to say that this should be sort_mem = 33554432?

Thanks.
cwl

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Thursday, September 11, 2003 3:00 PM
> To: Clay Luther
> Cc: Pgsql-General (E-mail)
> Subject: Re: [GENERAL] 50K record DELETE Begins, 100% CPU, Never
> Completes 1 hour later
>
> "Clay Luther" <claycle@cisco.com> writes:
> > Sort_mem is 32K.
>
> Try more (like 32M).  Particularly in 7.4, you can really hobble a
> query by starving it for sort memory (since that also determines
> whether hashtable techniques will be tried).
>
> 			regards, tom lane
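Since the value is read in kilobytes (per the comment in postgresql.conf), the two numbers above work out to:

    sort_mem = 32768       # 32768 KB   = 32 MB
    sort_mem = 33554432    # 33554432 KB = 32 GB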
"Clay Luther" <claycle@cisco.com> writes: > By 32K I meant: > sort_mem = 32768 # min 64, size in KB Ah, so really 32M. Okay, that is in the realm of reason. But it would still be worth your while to investigate whether performance changes if you kick it up some more notches. If the planner is estimating that you would need 50M for a hash table, it will avoid hash-based plans with this setting. (Look at estimated number of rows times estimated row width in EXPLAIN output to get a handle on what the planner is guessing as the data volume at each step.) The rationale for keeping sort_mem relatively small by default is that you may have a ton of transactions each concurrently doing one or several sorts, and you don't want to run the system into swap hell. But if you have one complex query to execute at a time, you should consider kicking up sort_mem just in that session. regards, tom lane