Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq - Mailing list pgsql-admin

From Aldo Sarmiento
Subject Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq
Date
Msg-id CAHX=r6wgC=WvEtgRYbhvNoK80-fqS9QBaJBwqyyU_xv9NpAfuw@mail.gmail.com
In response to [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq  (Cory Nemelka <cnemelka@gmail.com>)
Responses Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq  (Cory Nemelka <cnemelka@gmail.com>)
List pgsql-admin
I believe large values get moved out of the main table into a TOAST table, where they are split into chunks (just under 2 kB each with the default 8 kB page size) and stored as separate rows. A 300 MB value therefore spans on the order of 150,000 chunk rows that the server must fetch and reassemble before it can hand the column back to the client: https://www.postgresql.org/docs/9.5/static/storage-toast.html

Aldo Sarmiento
President & CTO

8687 Research Dr, Irvine, CA 92618

On Thu, Oct 19, 2017 at 2:03 PM, Cory Nemelka <cnemelka@gmail.com> wrote:
I have been getting very poor performance using libpq to process very large TEXT columns (300MB+). I suspect it is I/O related but can't be sure.

Has anyone had experience with the same issue who can help me resolve it?

--cnemelka
