Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ] - Mailing list pgsql-hackers

From Dilip kumar
Subject Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
Date
Msg-id 4205E661176A124FAF891E0A6BA913526592482E@SZXEML507-MBS.china.huawei.com
In response to Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]  (Jan Lentfer <Jan.Lentfer@web.de>)
Responses Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
List pgsql-hackers

On 08 November 2013 13:38, Jan Lentfer wrote:
> For this use case, would it make sense to queue work (tables) in order of their size, starting on the largest one?

> For the case where you have tables of varying size, this would reduce overall processing time, since it prevents large (read: long-processing-time) tables from being processed in the last step. Processing the large tables first, and backfilling the processing slots/jobs with smaller tables as they free up, would save overall execution time.
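The scheduling idea above is essentially greedy longest-processing-time-first (LPT): sort the tables by size, largest first, and hand each one to whichever worker slot is currently least loaded. A minimal sketch of that policy, separate from the actual patch (the function name `lpt_makespan` and the sample sizes are illustrative, not from vacuumdb):

```c
#include <assert.h>
#include <stdlib.h>

/* Comparator for qsort: sort table sizes in descending order. */
static int cmp_desc(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (y > x) - (y < x);
}

/*
 * Greedy largest-first scheduling: assign each table (largest first)
 * to the least-loaded worker.  Returns the makespan, i.e. the time at
 * which the last worker finishes.  Mutates the sizes array (sorts it).
 */
static long lpt_makespan(long *sizes, int ntables, int nworkers)
{
    long *load = calloc(nworkers, sizeof(long));
    long makespan = 0;

    qsort(sizes, ntables, sizeof(long), cmp_desc);

    for (int t = 0; t < ntables; t++)
    {
        int min = 0;
        for (int w = 1; w < nworkers; w++)
            if (load[w] < load[min])
                min = w;
        load[min] += sizes[t];
    }

    for (int w = 0; w < nworkers; w++)
        if (load[w] > makespan)
            makespan = load[w];

    free(load);
    return makespan;
}
```

With one 100-unit table and six 10-unit tables on two workers, largest-first finishes in 100 units (the big table runs alone on one worker while the other clears the small ones), whereas a queue that leaves the big table for last finishes in about 130.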

Good point; I have made the change and attached the modified patch.

Regards,

Dilip

