large object write performance - Mailing list pgsql-performance

From: Bram Van Steenlandt
Subject: large object write performance
Msg-id: 561634BD.2000309@diomedia.be
List: pgsql-performance
Hi,

I use PostgreSQL often, but I'm not very familiar with how it works internally.

I've made a small script to back up files from different computers to a
PostgreSQL database: sort of a versioned, networked backup system.
It works with large objects (an oid stored in a table, linked to the
large object), which I import using psycopg.
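
For reference, the import boils down to something like this (a
simplified sketch; the table and column names are made up here, not my
real schema):

import psycopg2

CHUNK = 1024 * 1024  # read and write in 1 MB pieces

conn = psycopg2.connect("dbname=backups")
try:
    # oid=0 asks the server to assign a fresh OID for the new large object
    lobj = conn.lobject(0, "wb")
    with open("somefile.bin", "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            lobj.write(data)
    lobj.close()
    # link the large object's oid to a row in an ordinary table
    with conn.cursor() as cur:
        cur.execute("INSERT INTO files (name, data_oid) VALUES (%s, %s)",
                    ("somefile.bin", lobj.oid))
    conn.commit()
finally:
    conn.close()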

It works well, but it's slow.

The database (9.2.9) on the server (FreeBSD 10) runs on a ZFS mirror.
If I copy a file to the mirror using scp, I get 37 MB/sec.
My script achieves something like 7 or 8 MB/sec on large (100 MB+) files.

I've never used PostgreSQL for something like this. Is there something I
can do to speed things up?
It's not a huge problem, as it's only the initial run that takes a while
(after that, most files are already in the db).
Still, it would be nice if it were a little faster.
The CPU is mostly idle on the server; the filesystem is running at 100%.
This is a separate PostgreSQL server (I've used FreeBSD profiles to have
two PostgreSQL servers running), so I can change this setup so it works
better for this application.

I've read different suggestions online, but I'm unsure which is best;
they all speak of files that are only a few KB, not 100 MB or bigger.

P.S. English is not my native language.

thx
Bram

