Re: large object write performance - Mailing list pgsql-performance

From Graeme B. Bell
Subject Re: large object write performance
Msg-id 3D2F06B8-297D-48CF-AA8C-47FD70925863@skogoglandskap.no
In response to large object write performance  (Bram Van Steenlandt <bram@diomedia.be>)
List pgsql-performance
Seems a bit slow.

1. Can you share the script (the portion that does the file transfer) with the list? Maybe you're doing something unusual there by mistake.
Similarly, the settings you're using for scp.

2. What’s the network like?
For example, what if the underlying network is only capable of 10MB/s peak, and scp is using compression and the files are highly compressible?
Have you tried storing zip or gzip’d versions of the file into postgres? (that’s probably a good idea anyway)
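As a hedged sketch of that idea (psycopg2 assumed; the `store_compressed` helper and its calling convention are made up for illustration, not taken from Bram's script):

```python
# Sketch: gzip a file before storing it as a PostgreSQL large object.
# Assumes psycopg2; store_compressed() is illustrative only.
import gzip


def compress_payload(data: bytes) -> bytes:
    """Gzip the payload; text-like backup files often shrink a lot."""
    return gzip.compress(data)


def store_compressed(conn, path):
    """Create a new large object holding the gzip'd file contents."""
    with open(path, "rb") as f:
        payload = compress_payload(f.read())
    lobj = conn.lobject(0, "wb")   # oid=0 -> let the server assign one
    try:
        lobj.write(payload)
        return lobj.oid            # store this oid in your table
    finally:
        lobj.close()
        conn.commit()
```

Less data written means less filesystem load, at the cost of some CPU (which is mostly idle here anyway).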

3. ZFS performance can depend on available memory and use of caches (memory + L2ARC for reading, ZIL cache for
writing).
Maybe put an intel SSD in there (or a pair of them) and use it as a ZIL cache.

4. Use dd to measure the write performance of ZFS doing a local write to the machine. What speed do you get?

5. Transfer a zip’d file over the network using scp. What speed do you get?

6. Is your postgres running all the time, or do you start it before this test? Perhaps check if any background tasks are
running when you use postgres - autovacuum, autoanalyze etc.
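One way to check for that is to query pg_stat_activity (the pid/state/query column names used here exist from 9.2 on; the helper name is made up):

```python
# Sketch: list non-idle backends via pg_stat_activity (9.2+ column
# names). list_active_backends() is illustrative and needs a live
# psycopg2 connection.
ACTIVITY_SQL = """
    SELECT pid, state, query
    FROM pg_stat_activity
    WHERE state IS DISTINCT FROM 'idle'
"""


def list_active_backends(conn):
    with conn.cursor() as cur:
        cur.execute(ACTIVITY_SQL)
        return cur.fetchall()   # autovacuum workers show up here too
```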

Graeme Bell

> On 08 Oct 2015, at 11:17, Bram Van Steenlandt <bram@diomedia.be> wrote:
>
> Hi,
>
> I use postgresql often but I'm not very familiar with how it works internally.
>
> I've made a small script to backup files from different computers to a postgresql database.
> Sort of a versioning networked backup system.
> It works with large objects (oid in table, linked to large object), which I import using psycopg
>
> It works well, but slowly.
>
> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.
> If I copy a file to the mirror using scp I get 37MB/sec
> My script achieves something like 7 or 8MB/sec on large (+100MB) files.
>
> I've never used postgresql for something like this, is there something I can do to speed things up ?
> It's not a huge problem as it's only the initial run that takes a while (after that, most files are already in the db).
> Still it would be nice if it would be a little faster.
> cpu is mostly idle on the server, filesystem is running 100%.
> This is a separate postgresql server (I've used freebsd profiles to have 2 postgresql servers running) so I can change this setup so it will work better for this application.
>
> I've read different suggestions online but I'm unsure which is best; they all speak of files which are only a few kB, not 100MB or bigger.
>
> ps. english is not my native language
>
> thx
> Bram
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance


