Re: Pg and compress - Mailing list pgsql-general

From Jov
Subject Re: Pg and compress
Date
Msg-id CADyrUxMDxbiRh4rgz9xwbUCAiXx7tkAOyqKVr59sH8BTo7S1nw@mail.gmail.com
Whole thread Raw
In response to Re: Pg and compress  (John R Pierce <pierce@hogranch.com>)
Responses Re: Pg and compress  (John R Pierce <pierce@hogranch.com>)
List pgsql-general

Most fields are bigint and one is varchar.
There are no indexes.
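For what it's worth, PostgreSQL's built-in TOAST compression only kicks in for large variable-length values, so rows made up of bigint columns get no automatic compression. The actual on-disk footprint can be checked from psql and compared against the source CSV; a minimal sketch, assuming a hypothetical table named events:

```sql
-- Hypothetical table name "events"; compare the result against the CSV file size.
-- pg_total_relation_size includes the heap, TOAST data, and any indexes.
SELECT pg_size_pretty(pg_total_relation_size('events'));

-- pg_relation_size reports the main heap only.
SELECT pg_size_pretty(pg_relation_size('events'));
```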

On 2011-9-27 3:34 AM, "John R Pierce" <pierce@hogranch.com> wrote:
>
> On 09/26/11 6:59 AM, Jov wrote:
>>
>>
>> Hi all,
>> We are going to use pg as a data warehouse, but after some tests we found that plain-text data in CSV format becomes 3 times bigger when loaded into pg. We use COPY to load the data. We tried some optimizations, which reduced it to 2.5 times bigger; other databases can compress to about 1/3 of the plain text on average. Bigger data means heavier I/O.
>> So my question is: how can data be compressed in pg? Can a filesystem with a compression feature, such as ZFS or btrfs, work well with pg?
>>
>
> your source data is CSV, what data types are the fields in the table(s) ? do you have a lot of indexes on this table(s)?
>
>
>
> --
> john r pierce                            N 37, W 122
> santa cruz ca                         mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
