Re: more than 2GB data string save - Mailing list pgsql-general

From AI Rumman
Subject Re: more than 2GB data string save
Date
Msg-id 2a7905441002092238y3d2c96ecrcec7ac7126251746@mail.gmail.com
Whole thread Raw
In response to Re: more than 2GB data string save  (Steve Atkins <steve@blighty.com>)
Responses Re: more than 2GB data string save  (Steve Atkins <steve@blighty.com>)
List pgsql-general
Thanks for your quick answers.
 
But if I store the data in a file and keep only the file name in the database, is it still possible to use text search (tsvector and tsquery) indexing on those external files?

On Wed, Feb 10, 2010 at 12:26 PM, Steve Atkins <steve@blighty.com> wrote:

On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:

> On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman <rummandba@gmail.com> wrote:
>> How to save 2 GB or more text string in Postgresql?
>> Which data type should I use?
>
> If you have to you can use either the lo interface, or you can use
> bytea.  Large Object (i.e. lo) allows for access much like fopen /
> fseek  etc in C, but the actual data are not stored in a row with
> other data, but alone in the lo space.  Bytea is a legit type that you
> can have as one of many in a row, but you retrieve the whole thing at
> once when you get the row.
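For illustration, a rough sketch of the two options described above (table and file names are hypothetical; `lo_import`/`lo_export` are server-side functions that read and write the *server's* filesystem, so this assumes the file is accessible there):

```sql
-- Large object route: the data lives in the lo space, and the row
-- stores only an oid referencing it. Access is streamed, fopen/fseek
-- style, rather than returned whole.
CREATE TABLE blobs (id serial PRIMARY KEY, content oid);
INSERT INTO blobs (content) VALUES (lo_import('/tmp/big_input.txt'));
SELECT lo_export(content, '/tmp/big_output.txt') FROM blobs WHERE id = 1;

-- Bytea route: an ordinary column among others in the row, but the
-- whole value comes back at once when you select the row.
CREATE TABLE blobs2 (id serial PRIMARY KEY, content bytea);
INSERT INTO blobs2 (content) VALUES (decode('deadbeef', 'hex'));
```

Client-side, the same large objects can be streamed with the libpq lo_open/lo_read/lo_write calls instead of moving whole files around.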

Bytea definitely won't handle more than 1 GB. I don't think the lo interface
will handle more than 2GB.

>
> Preferred way to store 2GB data is to put it into a file and put the
> name of the file into the database.


This.
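A minimal sketch of that file-name approach, which also addresses the full-text-search question: tsvector can only index text that is actually stored in the database, so the usual pattern is to have the application extract the text from the external file and store the tsvector alongside the path (all names here are hypothetical):

```sql
-- Store only the file's path; keep a tsvector of the extracted text
-- so full-text search still works inside the database.
CREATE TABLE documents (
    id       serial PRIMARY KEY,
    filepath text NOT NULL,
    body_tsv tsvector
);
CREATE INDEX documents_tsv_idx ON documents USING gin (body_tsv);

-- The application reads the file and passes its text in:
INSERT INTO documents (filepath, body_tsv)
VALUES ('/data/big_doc.txt',
        to_tsvector('english', '...text extracted from the file...'));

-- Search returns the path, and the application opens the file:
SELECT filepath
FROM documents
WHERE body_tsv @@ to_tsquery('english', 'example & phrase');
```

Note that tsvector values have their own size limits, so a multi-gigabyte document would likely need to be split into chunks (one row per chunk) rather than indexed as a single vector.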

Cheers,
 Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
