Hi Pavel,
thanks for the information. I've investigated this further: there is an
autovacuum process running on the table, and it keeps starting
automatically even though "autovacuum = off" is set in the
postgresql.conf configuration file.
The test of removing a 5T file with rm was fast, nowhere near 24 hours,
so I guess autovacuum is the issue. Is there any way to disable it? I
killed the process with 'kill -9' yesterday, but it started again.
Is there any way to cancel this process and stop PostgreSQL from running
autovacuum on the table again, so the drop can proceed instead?
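
For reference, here is roughly what I'd like to end up doing (just a
sketch; "big_table" is a placeholder for the real table name):

```sql
-- Disable autovacuum for this one table only; the per-table storage
-- parameter is honored even though the global "autovacuum = off"
-- does not stop anti-wraparound vacuums.
ALTER TABLE big_table SET (autovacuum_enabled = false);

-- Cancel the running autovacuum worker cleanly instead of kill -9
-- (kill -9 on a backend forces the postmaster to restart everything).
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';

-- Then drop the table.
DROP TABLE big_table;
```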
Thanks,
Michal
On 01/12/2016 12:01 PM, Pavel Stehule wrote:
> Hi
>
> 2016-01-12 11:57 GMT+01:00 Michal Novotny <michal.novotny@trustport.com
> <mailto:michal.novotny@trustport.com>>:
>
> Dear PostgreSQL Hackers,
> I've discovered an issue with dropping a large table (~5T). I was
> under the impression that DROP TABLE is a fast operation, but I found
> out my assumption was wrong.
>
> Is there any way to tune it so that a large table can be dropped in a
> matter of seconds or minutes? Any configuration variable in
> postgresql.conf, or other tuning options available?
>
>
> DROP TABLE should be fast.
>
> There can be two reasons why it isn't:
>
> 1. locks - are you sure the statement wasn't waiting on some lock?
>
> 2. filesystem issue - can you check how fast your I/O can rm a 5TB file?
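>
> For example (path and size only illustrative; a sparse file made with
> truncate gives a rough timing, though it may understate the cost of
> unlinking a fully allocated file):
>
>     truncate -s 5T /tmp/testfile
>     time rm /tmp/testfile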
>
> Regards
>
> Pavel
>
> The PostgreSQL version used is 9.4.
>
> Thanks a lot!
> Michal
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org
> <mailto:pgsql-hackers@postgresql.org>)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>