Large objects - bug? caveat? feature?

From: "Justin Long"
I am using PHP3 with Apache on top of Linux, with beta 2 of PostgreSQL. I
have uncovered an interesting behavior (bug? feature?) in large objects.

If (using PHP3) you write out, say, a 2,000-byte large object (a text
article in this case), and then using an edit routine you replace that
2,000-byte object with a more concise, edited 1,000-byte document, what you
get is the 1,000 bytes of the edit, followed by the final 1,000 bytes of the
original 2,000-byte article. In other words, it doesn't shrink the file to
the edited size.
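
The only workaround I can see for now is to drop the old object and write a
fresh one rather than overwriting in place. A rough sketch in C against
libpq (the PHP3 pg_lo* functions wrap these same calls); the function name
and the assumption that we track the article's OID ourselves are mine:

    #include <stddef.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

    /* Replace a large object wholesale: unlink the old one and write a
     * fresh one, so no tail of the old data can survive.  Must run
     * inside a transaction (BEGIN ... COMMIT).  Returns the new OID,
     * or 0 on failure; the caller then repoints its own table at it. */
    Oid replace_lobj(PGconn *conn, Oid old_oid, const char *data, size_t len)
    {
        Oid new_oid;
        int fd;

        if (lo_unlink(conn, old_oid) < 0)       /* drop old object */
            return 0;
        new_oid = lo_creat(conn, INV_READ | INV_WRITE);
        if (new_oid == 0)
            return 0;
        fd = lo_open(conn, new_oid, INV_WRITE);
        if (fd < 0)
            return 0;
        if (lo_write(conn, fd, data, len) != (int) len) {
            lo_close(conn, fd);
            return 0;
        }
        lo_close(conn, fd);
        return new_oid;
    }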

Secondly, I notice that in my data/base/... area, whenever I create an
object, it creates a single file on the disk. Does that mean that if I
have 100,000 articles in my knowledge base, it is possible I will end up
with 100,000 individual 8-to-10 KB files on my hard drive? Does Linux
suffer degradation in performance with that many files in a directory?

Justin


Never retreat. Never surrender. Never cut a deal with a dragon.
_______________________________________________________________
Justin Long                   CIO / Site Editor
616 Station Square Ct         Network for Strategic Missions
Chesapeake, VA 23320          977 Centerville Trnpk CSB 317
JustinLong@xc.org             Va Beach, VA 23463
Check out our site at:        http://www.strategicnetwork.org



Re: [SQL] Large objects - bug? caveat? feature?

From: Chris Bitmead
Justin Long wrote:

> 2,000-byte article. In other words, it doesn't shrink the file
> to the edited size.

Does the interface have any equivalent to the UNIX O_TRUNC?
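
What I mean: with the plain UNIX API, opening for write with O_TRUNC throws
away the old contents, so a shorter rewrite can never leave a stale tail
behind. A quick illustration (nothing Postgres-specific here):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* UNIX file semantics: O_TRUNC cuts the file to zero length on
     * open, so writing a shorter document can't leave old bytes. */
    void rewrite_file(const char *path, const char *text)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            write(fd, text, strlen(text));
            close(fd);      /* file is now exactly strlen(text) bytes */
        }
    }

If lo_open(conn, oid, INV_WRITE) only ever opens at offset 0 with the old
contents left intact, that would explain the leftover tail Justin is seeing.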

> Secondly, I notice that in my data/base/... area, whenever I create an
> object, it creates a single file on the disk. Does that mean that if I
> have 100,000 articles in my knowledge base, it is possible I will end up
> with 100,000 individual 8-to-10 KB files on my hard drive?

It's worse than that, I think. I believe you get _two_ files for each
large object. Large objects really suck badly.

> Does Linux
> suffer degradation in performance with that many files in a
> directory?

It absolutely does suffer. Even worse, your regular database tables are in
the same directory, so they'll suffer too. Another problem is that pg_dump
doesn't dump large objects, so you have to figure out some other backup
strategy.
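
The least-bad stopgap I know of is to walk whatever table holds your large
object OIDs and lo_export each one to a file next to the pg_dump output. A
rough sketch, assuming a hypothetical articles(id, body_oid) table:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    /* Export every large object referenced by articles.body_oid to one
     * file per object, as a stopgap backup beside the pg_dump output.
     * Returns the number exported, or -1 on error. */
    int backup_lobjs(PGconn *conn, const char *dir)
    {
        PGresult *res;
        char path[1024];
        int i, n;

        PQclear(PQexec(conn, "BEGIN"));  /* LO calls need a transaction */
        res = PQexec(conn, "SELECT body_oid FROM articles");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            PQclear(res);
            return -1;
        }
        n = PQntuples(res);
        for (i = 0; i < n; i++) {
            Oid oid = (Oid) atol(PQgetvalue(res, i, 0));
            snprintf(path, sizeof path, "%s/lobj_%lu",
                     dir, (unsigned long) oid);
            if (lo_export(conn, oid, path) < 0) {  /* object -> local file */
                PQclear(res);
                return -1;
            }
        }
        PQclear(res);
        PQclear(PQexec(conn, "COMMIT"));
        return n;
    }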

-- 
Chris Bitmead
http://www.bigfoot.com/~chris.bitmead
mailto:chris.bitmead@bigfoot.com