Hello everyone,
I am testing an import of 100 million records from Azure Blob Storage into Azure PostgreSQL. I ran the test 5 times, and the time it took kept increasing with each run.
Is there a known explanation for this linear increase in run time for the same amount of data?
1. What version of PG is it? ("SELECT VERSION();" should tell you.)
2. Are you truncating the table between test runs, deleting all records, or appending? (DELETE leaves dead tuples behind, and appending keeps growing the table and its indexes; either would make successive runs slower. See the sketch after this list.)
3. Is the blob data stored in a BYTEA column, or are you using the (discouraged) "Large Objects" facility?
4. How are you loading the blob data? (See the COPY sketch below.)
-- Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.