Tom Lane wrote:
> Colin Wetherbee <cww@denterprises.org> writes:
>> Let's say I have a users table that holds about 15 columns of data about
>> each user.
>
>> If I write one Perl sub for each operation on the table (e.g. one that
>> gets the username and password hash, another that gets the last name and
>> first name, etc.), there will be a whole lot of subs, each of which
>> performs one very specific task.
>
>> If I write one larger Perl sub that grabs the whole row, and then I deal
>> with the contents of the row in Perl, ignoring columns as I please, it
>> will require fewer subs and, in turn, imply cleaner code.
>
>> My concern is that I don't know what efficiency I would be forfeiting on
>> the PostgreSQL side of the application by always querying entire rows if
>> my transaction occurs entirely within a single table.
>
> Not nearly as much as you would lose anytime you perform two independent
> queries to fetch different fields of the same row. What you really need
> to worry about here is making sure you only fetch the row once
> regardless of which field(s) you want out of it. It's not clear to me
> whether your second design concept handles that, but if it does then
> I think it'd be fine.
Yes, the second design concept would handle that.
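For the archives, here is a minimal sketch of what I mean, assuming DBI and a hypothetical "users" table keyed on "user_id" (names are illustrative, not my actual schema): one sub fetches the whole row once and caches it, and every accessor works off that cached hash.

```perl
use strict;
use warnings;
use DBI;

my %row_cache;

# Fetch the entire row for a user exactly once; later calls for the
# same user_id return the cached copy instead of re-querying.
sub get_user_row {
    my ($dbh, $user_id) = @_;
    return $row_cache{$user_id} if exists $row_cache{$user_id};
    my $row = $dbh->selectrow_hashref(
        'SELECT * FROM users WHERE user_id = ?', undef, $user_id);
    $row_cache{$user_id} = $row;
    return $row;
}

# Callers then pick out whichever fields they happen to need:
#   my $row = get_user_row($dbh, 42);
#   my ($first, $last) = @{$row}{qw(first_name last_name)};
```

That way asking for the name and then the password hash costs one query, not two.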
> The only case where custom field sets might be important is if you have
> fields that are wide enough to potentially get TOASTed (ie more than a
> kilobyte or so apiece). Then it'd be worth the trouble to not fetch
> those when you don't need them. But that apparently isn't the case
> with this table.
Sounds good.
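If the table ever does grow a wide column (say a hypothetical "bio" text field large enough to be TOASTed), I take it the fix would be to enumerate the narrow columns instead of using SELECT *, something like:

```perl
# Hypothetical column names; the point is just to exclude the wide
# "bio" column so PostgreSQL never has to detoast it on this path.
my $sth = $dbh->prepare(
    'SELECT user_id, username, pw_hash, first_name, last_name
       FROM users
      WHERE user_id = ?');
```

But as you say, none of my current columns are anywhere near a kilobyte, so SELECT * is fine here.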
Thanks.
Colin