> On Jan 7, 2020, at 12:01 PM, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
>
> On 1/7/20 12:47 PM, Israel Brewster wrote:
>> One potential issue I just thought of with this approach: disk space. Will I be doubling the amount of space used
>> while both tables exist? If so, that would prevent this from working - I don’t have that much space available at the
>> moment.
>
> It will definitely increase the disk space by at least the data in the new table. How much relative to the old table
> is going to depend on how aggressive the AUTOVACUUM/VACUUM is.
>
> A suggestion for an alternative approach:
>
> 1) Create a table:
>
> create table change_table(id int, changed_fld some_type)
>
> where id is the PK from the existing table.
>
> 2) Run your conversion function against the existing table, changed so that it puts the new field value into
> change_table keyed to the id/PK. Probably do this in batches.
>
> 3) Once all the values have been updated, do an UPDATE set changed_field = changed_fld from change_table where
> existing_table.pk = change_table.id;
Makes sense. Use the fast SELECT to create/populate the other table, then the update can just be setting a value, not
having to call any functions. From what you are saying about updates though, I may still need to batch the UPDATE
section, with occasional VACUUMs to maintain disk space. Unless I am not understanding the concept of “tuples that are
obsoleted by an update”, which is possible.
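
Just to check that I'm picturing this correctly, something like the following? (A rough sketch only - convert_value()
stands in for my actual conversion function, the column types and the id ranges are placeholders, and existing_table /
change_table are the names from your example.)

-- 1) change table keyed by the existing table's PK
CREATE TABLE change_table (id int PRIMARY KEY, changed_fld text);

-- 2) populate it in batches using the fast SELECT path
INSERT INTO change_table (id, changed_fld)
SELECT id, convert_value(old_fld)
FROM existing_table
WHERE id >= 1 AND id < 100001;   -- repeat with the next id range until the whole table is covered

-- 3) fold the values back in, also batched, vacuuming between batches
UPDATE existing_table AS e
SET changed_field = c.changed_fld
FROM change_table AS c
WHERE e.id = c.id
  AND e.id >= 1 AND e.id < 100001;
VACUUM existing_table;   -- lets the space from the obsoleted tuples be reused by the next batch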
>
>> ---
>> Israel Brewster
>> Software Engineer
>> Alaska Volcano Observatory
>> Geophysical Institute - UAF
>> 2156 Koyukuk Drive
>> Fairbanks AK 99775-7320
>> Work: 907-474-5172
>> cell: 907-328-9145
>
>
> --
> Adrian Klaver
> adrian.klaver@aklaver.com