Re: Delete huge Table under XFS - Mailing list pgsql-performance

From Tomas Vondra
Subject Re: Delete huge Table under XFS
Date
Msg-id 20191006205452.5vx3qqxwjbxc6jzh@development
In response to Re: Delete huge Table under XFS  (Joao Junior <jcoj2006@gmail.com>)
List pgsql-performance
On Thu, Sep 19, 2019 at 07:00:01PM +0200, Joao Junior wrote:
>A table with 800 gb means 800 files of 1 gb. When I use truncate or drop
>table,  xfs that is a log based filesystem,  will write lots of data in its
>log and this is the problem. The problem is not postgres, it is the way
>that xfs works with big files , or being more clear, the way that it
>handles lots of files.
>

I'm a bit skeptical about this explanation. Yes, XFS has journalling,
but only for metadata - and I have a hard time believing that deleting
800 files (or a small multiple of that) would write "lots of data" into
the journal, or cause noticeable performance issues. I wonder how you
concluded this is actually the problem.

That being said, TRUNCATE is unlikely to perform better than DROP,
because it also deletes all the files at once. What you might try is
dropping the indexes one by one, and then the table. That should delete
files in smaller chunks.
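A minimal sketch of that approach, assuming a hypothetical table
big_table with two indexes (the object names are placeholders, not from
the thread):

```sql
-- Drop each index separately. Each statement runs in its own
-- transaction, so the ~1GB segment files backing that one index are
-- unlinked in a smaller batch rather than all at once.
DROP INDEX big_table_idx1;
DROP INDEX big_table_idx2;

-- Finally drop the table itself; at this point only the heap
-- (and any TOAST) files remain to be removed.
DROP TABLE big_table;
```

Note that the statements should not be wrapped in a single
BEGIN/COMMIT - PostgreSQL unlinks the underlying files at commit time,
so one big transaction would again delete everything at once.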


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 


