Re: Large object insert performance. - Mailing list pgsql-general

From Tom Lane
Subject Re: Large object insert performance.
Date
Msg-id 23076.967090721@sss.pgh.pa.us
In response to Large object insert performance.  (Peter Haight <peterh@sapros.com>)
List pgsql-general
Peter Haight <peterh@sapros.com> writes:
> All I'm doing is inserting the large objects.

How many LOs are we talking about here?

The current LO implementation creates a separate table, with index,
for each LO.  That means two files in the database directory per LO.
On most Unix filesystems I've dealt with, performance will go to hell
in a handbasket for more than a few thousand files in one directory.
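The directory-scaling effect described above can be sketched with a small (and deliberately rough) Python timing script; it is an illustration, not a rigorous benchmark, and on modern filesystems with hashed directory indexes the degradation may be much milder than on the filesystems of that era. The `lo_` file names are purely illustrative.

```python
import os
import tempfile
import time

def time_lookups(n_files, n_probes=200):
    """Create n_files empty files in one directory, then time name lookups.

    Each os.stat() forces the kernel to resolve a name within the
    directory; on filesystems that scan directory entries linearly,
    this cost grows with the number of entries.
    """
    with tempfile.TemporaryDirectory() as d:
        for i in range(n_files):
            # one empty file per "large object" (names are illustrative)
            open(os.path.join(d, f"lo_{i}"), "w").close()
        start = time.perf_counter()
        for i in range(n_probes):
            os.stat(os.path.join(d, f"lo_{i % n_files}"))
        return time.perf_counter() - start

small = time_lookups(100)
large = time_lookups(5000)
print(f"100 files: {small:.6f}s, 5000 files: {large:.6f}s")
```

Since each large object contributed two files (table plus index), a few thousand LOs already meant several thousand directory entries.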

Denis Perchine did a reimplementation of LOs to store 'em in a single
table.  This hasn't been checked or applied to current sources yet,
but if you're feeling adventurous see the pgsql-patches archives from
late June.

> Is there any way to speed this up? If the handling of large objects is this
> bad, I think I might just store these guys on the file system.

You could do that too, if you don't need transactional semantics for
large-object operations.
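If you do go the filesystem route, the usual partial workaround is to write each object to a temporary file, fsync it, and rename() it into place, so readers never observe a half-written file. A minimal sketch (the function and file names here are hypothetical, and note this still gives no rollback tied to a database transaction):

```python
import os
import tempfile

def store_blob(directory, name, data):
    """Write data to directory/name without exposing a partial file.

    Writes to a temp file in the same directory, flushes and fsyncs,
    then renames into place. rename()/os.replace() is atomic within a
    single filesystem, so readers see either the old file or the
    complete new one -- but there is no transactional undo.
    """
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, os.path.join(directory, name))  # atomic swap
    except Exception:
        os.unlink(tmp)  # clean up the temp file on failure
        raise

with tempfile.TemporaryDirectory() as d:
    store_blob(d, "object.bin", b"hello large object")
    with open(os.path.join(d, "object.bin"), "rb") as f:
        result = f.read()
print(result)
```

The key design point is that the temp file must live on the same filesystem as the target, since a cross-filesystem rename is not atomic.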

            regards, tom lane
