Re: How to do faster DML - Mailing list pgsql-general

From Greg Sabino Mullane
Subject Re: How to do faster DML
Date
Msg-id CAKAnmmLcTS7zpeZ3m7mkJxGsu+f+Tv9bp5KV3T52Q+c8DMTm+A@mail.gmail.com
In response to Re: How to do faster DML  (veem v <veema0000@gmail.com>)
Responses Re: How to do faster DML  (veem v <veema0000@gmail.com>)
Re: How to do faster DML  ("Peter J. Holzer" <hjp-pgsql@hjp.at>)
List pgsql-general
On Mon, Feb 12, 2024 at 1:50 AM veem v <veema0000@gmail.com> wrote:
So we were thinking that adding many columns to a table should be fine in Postgres (we have a use case in which the total number of columns may reach ~500+). But then, accessing columns toward the end of a row is going to take more time than accessing a column at the start of the row. As you mentioned, accessing the 100th column may take 4 to 5 times longer than accessing the first column. So, is it advisable here to take a similar approach of breaking the table in two if the total number of columns in a table reaches a certain threshold?

I'm not sure exactly what Peter was testing to get those 4-5x figures, but I presume it was column access time, which would not translate to a 4-5x effect on your total query time. As for breaking the table in two, I would not recommend it: it adds complexity and introduces other problems. On the other hand, 500 columns is pretty extreme, so perhaps things rarely accessed or searched for could go into another table. It's really hard to say without knowing more about your data access patterns. Hopefully, we never see a "SELECT *" on a table like that! :)
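For what it's worth, if you do end up moving the rarely used columns out, the usual pattern is a one-to-one "hot/cold" split sharing the same primary key. A minimal sketch (table and column names here are invented purely for illustration):

```sql
-- Hypothetical schema: frequently queried columns stay in the main table,
-- rarely accessed ones move to a companion table with the same key.
CREATE TABLE orders_hot (
    order_id  bigint PRIMARY KEY,
    status    text NOT NULL,
    total     numeric(12,2)
    -- plus the other frequently queried columns
);

CREATE TABLE orders_cold (
    order_id   bigint PRIMARY KEY REFERENCES orders_hot (order_id),
    audit_blob jsonb,
    legacy_ref text
    -- plus the rarely accessed columns
);

-- Queries touching only the hot columns never pay the cost of the wide row:
SELECT order_id, status, total FROM orders_hot WHERE status = 'open';

-- The full row is reassembled with a join only when actually needed:
SELECT h.*, c.audit_blob
FROM orders_hot h
JOIN orders_cold c USING (order_id)
WHERE h.order_id = 42;
```

The trade-off is exactly the complexity mentioned above: every write touching both halves now needs a transaction across two tables, and any query needing both halves pays for a join.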

Cheers,
Greg
