I’d just do it as a single update in one transaction. It’s only 50M rows. It may take half an hour or so on decent hardware, depending on how resource-intensive your function is.
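For illustration, a minimal sketch of what that single-transaction approach might look like (big_table, derived_col, source_col, and your_function() are hypothetical stand-ins for your actual schema and function):

    BEGIN;
    -- one statement over the whole table; the explicit transaction lets you
    -- sanity-check a few rows before making the change permanent
    UPDATE big_table
       SET derived_col = your_function(source_col);
    SELECT derived_col FROM big_table LIMIT 10;  -- spot check the result
    COMMIT;  -- or ROLLBACK; if the spot check looks wrong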
I must emphasize: this estimate is HIGHLY dependent on hardware and on the complexity of the table (number of indices, etc.). (I suspect there’s a correlation between a table’s size/business value and its number of indices.)
I’m thinking it might be worth doing a “quick” test on 1,000 or so records (or whatever number runs in a minute or so), watching processor utilization as it runs. That should give me a better feel for where the bottlenecks are and how long the entire update would take. I’m assuming, of course, that the total time scales more or less linearly with the number of records.
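A rough sketch of such a test in psql, using the same hypothetical names and assuming an integer primary key id, wrapped in a transaction that is rolled back so nothing actually changes:

    \timing on
    BEGIN;
    UPDATE big_table
       SET derived_col = your_function(source_col)
     WHERE id IN (SELECT id FROM big_table LIMIT 1000);
    ROLLBACK;  -- discard the test; we only wanted the elapsed time

Watching top/iostat while that runs should show whether the function (CPU) or the index/WAL writes (I/O) dominate. Just keep in mind that a 1,000-row sample may understate the per-row cost on the full 50M rows, since index maintenance and checkpointing don’t scale perfectly linearly.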
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory
Geophysical Institute - UAF
2156 Koyukuk Drive
Fairbanks AK 99775-7320
Work: 907-474-5172
cell: 907-328-9145
If that fails[1], only then would I start looking into batching things. But then you still need to figure out why it failed and what to do about it. If it fails, it will probably fail fast; if not, you’re looking at a one-off situation that won’t need more than a few workarounds, after which you can just run the update again.
Ad 1). No harm has been done; it’s a single transaction that rolled back.
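If batching did become necessary, one common approach (sketched with the same hypothetical names, assuming an integer primary key id) is to update in key ranges, so each statement commits on its own and a failure only costs the current batch:

    -- run repeatedly, advancing the range until max(id) is covered
    UPDATE big_table
       SET derived_col = your_function(source_col)
     WHERE id >= 1 AND id < 1000001;

You’d normally script the range boundaries from a small driver (shell, Python, etc.) rather than typing them by hand.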
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.