How big is your work_mem setting, and is this behavior affected by its size?
You can increase work_mem on an individual connection before running the test.
Simply:
set work_mem = '100MB'
to set it to 100 megabytes. If your issue is data spilling out of work_mem to temporary storage, this setting will affect that.
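For example, something along these lines (100MB is just an illustrative value; you would tune it to roughly the size of the hash):

    -- Check the current per-session setting
    SHOW work_mem;

    -- Raise it for this session only
    SET work_mem = '100MB';

    -- Re-run the test query; if the hash was spilling, the writes
    -- to the temp area should shrink or disappear
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM bigbigtable
    WHERE customerid IN (SELECT customerid FROM smallcustomertable)
      AND x != 'special'
      AND y IS NULL;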
On Thu, Sep 18, 2008 at 10:30 AM, Nikolas Everett <nik9000@gmail.com> wrote:
List,
I'm a bit confused as to why this query writes to the disk:

SELECT count(*)
FROM bigbigtable
WHERE customerid IN (SELECT customerid FROM smallcustomertable)
  AND x != 'special'
  AND y IS NULL
It writes a whole bunch of data to the disk that holds the tablespace where bigbigtable lives, and it also writes a little data to the main disk. It looks like it is actually WAL-logging these writes.
Here is the EXPLAIN ANALYZE:

Aggregate  (cost=46520194.16..46520194.17 rows=1 width=0) (actual time=4892191.995..4892191.995 rows=1 loops=1)
  ->  Hash IN Join  (cost=58.56..46203644.01 rows=126620058 width=0) (actual time=2.938..4840349.573 rows=79815986 loops=1)
        Hash Cond: ((bigbigtable.customerid)::text = (smallcustomertable.customerid)::text)
        ->  Seq Scan on bigbigtable  (cost=0.00..43987129.60 rows=126688839 width=11) (actual time=0.011..4681248.143 rows=128087340 loops=1)
              Filter: ((y IS NULL) AND ((x)::text <> 'special'::text))
        ->  Hash  (cost=35.47..35.47 rows=1847 width=18) (actual time=2.912..2.912 rows=1847 loops=1)
              ->  Seq Scan on smallcustomertable  (cost=0.00..35.47 rows=1847 width=18) (actual time=0.006..1.301 rows=1847 loops=1)
Total runtime: 4892192.086 ms
Can someone point me to some documentation as to why this writes to disk?
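For what it's worth, one way to confirm that those writes are the hash join spilling to temp files is to turn on temp-file logging and re-run the query. This is only a sketch, assuming the server is PostgreSQL 8.3 or later (where log_temp_files exists) and that you have superuser rights to set it:

    -- Log every temporary file the backend creates (0 = no size threshold);
    -- requires superuser and PostgreSQL 8.3+
    SET log_temp_files = 0;

    -- Re-run the query; any spill files and their sizes will
    -- show up in the server log
    SELECT count(*)
    FROM bigbigtable
    WHERE customerid IN (SELECT customerid FROM smallcustomertable)
      AND x != 'special'
      AND y IS NULL;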