This delete runs in a single transaction. That means the entire transaction
has to complete before you will see anything deleted. Interrupting the
transaction simply rolls it back, so nothing is deleted.
Tom already pointed out the potential foreign-key slowdown; another cause may
simply be drive speed.
My recommendation: cut the delete into chunks, for example delete the data one
week at a time. That way each transaction is smaller, the dataset to delete is
smaller, and each chunk finishes more quickly.
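A rough sketch of what that could look like, assuming the same
audittraillogentry table and intime/outtime columns from your mail (the week
boundaries below are made up; adjust them to your data and repeat until you
reach 2015-01-30):

  -- one week per statement; with autocommit each DELETE runs in its own
  -- transaction, so progress is visible and an interruption only loses
  -- the current chunk
  DELETE FROM audittraillogentry
   WHERE (intime  >= '2014-11-01' AND intime  < '2014-11-08')
      OR (outtime >= '2014-11-01' AND outtime < '2014-11-08');

  DELETE FROM audittraillogentry
   WHERE (intime  >= '2014-11-08' AND intime  < '2014-11-15')
      OR (outtime >= '2014-11-08' AND outtime < '2014-11-15');

  -- ... and so on, week by week. A VACUUM on the table afterwards lets
  -- PostgreSQL reuse the freed space.
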
Uwe
On Sun, May 03, 2015 03:24:25 AM Mitu Verma wrote:
> Hi,
>
> I am facing an issue with the deletion of a huge amount of data.
> We have a cron script which is used to delete the last 3 months of data from
> one of the tables. The table is large (8872597 rows, as you can see from the
> count below) since it holds the last 3 months of data.
>
> fm_db_Server3=# select count(*) from audittraillogentry ;
>
>
> count
> ---------
> 8872597
> (1 row)
>
> Now the issue is that when this deletion script is launched, it runs for
> more than 7 days and does nothing, i.e. not a single row has been deleted.
>
> Then we stopped the script, terminated the database sessions using SELECT
> pg_terminate_backend(pid), and ran the following command:
>
> delete from audittraillogentry where
>   intime <= to_timestamp('2015-01-30 23:59:59.999', 'YYYY/MM/DD-HH24:MI:SS.FF3')
>   OR outtime <= to_timestamp('2015-01-30 23:59:59.999', 'YYYY/MM/DD-HH24:MI:SS.FF3');
>
> Still this delete operation is not working and not a single row has been
> deleted from the table.
>
> Now I have the following questions:
>
> 1. Does PostgreSQL have some limitation on the deletion of large amounts of
> data?
> 2. Should I run VACUUM after stopping the cron script, probably to get a
> "smaller" table?
> 3. Would dropping the indexes help here? Not sure.
> 4. Should I think about partitioning? Is there any limitation when dealing
> with large data in PostgreSQL?
>
> regards
> Mitu
>