Re: [GENERAL] storing large files in database - performance - Mailing list pgsql-general

From Merlin Moncure
Subject Re: [GENERAL] storing large files in database - performance
Date
Msg-id CAHyXU0wfuKeQ-o=8YGR80gxgE2saR67+OX2TA-v6tnuQOtEKbA@mail.gmail.com
In response to Re: [GENERAL] storing large files in database - performance  (Thomas Kellerer <spam_eater@gmx.net>)
Responses Re: [GENERAL] storing large files in database - performance  (Eric Hill <Eric.Hill@jmp.com>)
List pgsql-general
On Tue, May 16, 2017 at 9:51 AM, Thomas Kellerer <spam_eater@gmx.net> wrote:
> John R Pierce schrieb am 16.05.2017 um 16:44:
>> On 5/16/2017 7:35 AM, Thomas Kellerer wrote:
>>> When my (JDBC based) SQL client and the database server are on the same computer...
>>
>> node.js is Javascript, not java w/ jdbc
>
> I know that.
>
> I mentioned JDBC so that it's clear that the timings were done using a different technology
>
> Maybe it's Node.js or the JavaScript "driver" that causes the problems.

When writing large objects to the database, the method of transmission
will very much determine performance until you start hitting the
natural boundaries imposed by the database.

via (hastily written):
#include "libpqtypes.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
  int s = 1024 * 1024 * 256;   /* 256MB payload */

  char *p = malloc(s);
  memset(p, 'x', s);
  p[s-1] = 0;                  /* null-terminate for %text */

  PGconn *conn = PQconnectdb("");
  if (PQstatus(conn) != CONNECTION_OK)
  {
    fprintf(stderr, "connection failed: %s\n", PQerrorMessage(conn));
    return 1;
  }
  PQinitTypes(conn);

  PGresult *res = PQexecf(conn, "insert into foo values(1,%text)", p);

  if(!res)
    fprintf(stderr, "*ERROR: %s\n", PQgeterror());

  PQclear(res);
  PQfinish(conn);
  free(p);
  return 0;
}

mmoncure@mernix2 09:13 AM /tmp$ gcc -otest test.c -lpq -lpqtypes -I /home/mmoncure/src/libpqtypes-1.5.1/src -I /home/mmoncure/pg94/include/ -L /home/mmoncure/src/libpqtypes-1.5.1/.libs/
mmoncure@mernix2 09:13 AM /tmp$ psql -c "create table foo(i int, f text)"
CREATE TABLE
mmoncure@mernix2 09:13 AM /tmp$ psql -c "alter table foo alter f set storage external"
ALTER TABLE
mmoncure@mernix2 09:14 AM /tmp$ time LD_LIBRARY_PATH=/home/mmoncure/src/libpqtypes-1.5.1/.libs ./test

real 0m3.245s
user 0m0.092s
sys 0m0.232s
mmoncure@mernix2 09:15 AM /tmp$ psql -c "select pg_size_pretty(pg_table_size('foo'))"
 pg_size_pretty
----------------
 266 MB
(1 row)


...that's roughly 79MB/sec (to a local server) for a 256MB transfer.
That's pretty good, I think.  We have a 1GB barrier on bytea/text, and
(at least in C, with certain reasonable precautions) you can work
comfortably under that limit.  There might be other, better strategies,
but it can be done.
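For comparison, one such strategy is to skip libpqtypes entirely and bind the payload as a binary-format parameter with plain libpq's PQexecParams, which avoids both SQL escaping and the text-format round trip. This is an untested sketch, not something I've benchmarked: it assumes the same "foo" table as above and a server reachable through PQconnectdb("")'s environment-variable defaults.

```c
/* Sketch: insert a 256MB payload as a bound binary parameter
 * using plain libpq (no libpqtypes).  Assumes the "foo" table
 * from the example above already exists. */
#include <libpq-fe.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
  int s = 1024 * 1024 * 256;
  char *p = malloc(s);
  memset(p, 'x', s);

  PGconn *conn = PQconnectdb("");   /* connection params from environment */
  if (PQstatus(conn) != CONNECTION_OK)
  {
    fprintf(stderr, "connection failed: %s\n", PQerrorMessage(conn));
    return 1;
  }

  const char *values[1]  = { p };
  int         lengths[1] = { s };
  int         formats[1] = { 1 };   /* 1 = send this parameter in binary */

  /* paramTypes is NULL, so the server infers the type of $1 from the
   * target column; binary input for text is just the raw bytes. */
  PGresult *res = PQexecParams(conn,
      "insert into foo values(1, $1)",
      1, NULL, values, lengths, formats, 0);

  if (PQresultStatus(res) != PGRES_COMMAND_OK)
    fprintf(stderr, "*ERROR: %s\n", PQerrorMessage(conn));

  PQclear(res);
  PQfinish(conn);
  free(p);
  return 0;
}
```

Build against a live installation with something like `gcc -o test2 test2.c -lpq -I$(pg_config --includedir) -L$(pg_config --libdir)`. Note that both approaches hold the whole value in client memory; past a certain size, the large-object (lo_*) API, which streams in chunks, may be the better fit.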

merlin

