Re: BIG files - Mailing list pgsql-novice

From Bruno Wolff III
Subject Re: BIG files
Msg-id 20050619124808.GC32482@wolff.to
In response to BIG files  (rabt@dim.uchile.cl)
Responses Re: BIG files
List pgsql-novice
On Sat, Jun 18, 2005 at 13:45:42 -0400,
  rabt@dim.uchile.cl wrote:
> Hi all Postgresql users,
>
> I've been using MySQL for years and now I have decided to switch to Postgresql,
> because I needed more robust "enterprise" features like views and triggers. I
> work with VERY large datasets: 60 monthly tables with 700,000 rows and 99
> columns each, with mostly large numeric values (15 digits) ( NUMERIC(15,0)
> datatypes, not all filled). So far, I've migrated 2 of my tables to a dedicated
>
> The main problem is disk space. The database files stored in Postgres take 4 or
> 5 times more space than in MySQL. Just to be sure, after each bulk load, I
> performed a VACUUM FULL to reclaim any possible lost space, but nothing gets
> reclaimed. My plain-text dump files with INSERTs are just 150 MB in size, while
> the files in the Postgres directory are more than 1 GB each! I've tested other
> free DBMSs like Firebird and Ingres, but Postgresql consumes far more disk
> space than the others.

From discussions I have seen here, MySQL implements NUMERIC using a floating
point type. Postgres stores it as a sequence of base-10000 digits, each packing
four decimal digits into two bytes of storage, plus some per-value overhead for
the length, precision, and scale. You might be better off using bigint to store
your data. That takes a fixed 8 bytes per datum and is probably the same size
as what MySQL was using.
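A back-of-the-envelope sketch of that size difference, assuming the base-10000 packing described above. The helper name and the flat 8-byte per-value overhead (length word plus weight/sign/scale fields) are illustrative assumptions, not PostgreSQL's exact on-disk layout:

```python
import math

def numeric_storage_estimate(decimal_digits, overhead=8):
    """Rough on-disk size of a PostgreSQL NUMERIC value, in bytes.

    Assumptions (for illustration only): four decimal digits fit in
    each 2-byte base-10000 "digit", and every value carries a fixed
    `overhead` bytes of header (length, weight, sign, scale).
    """
    base10000_digits = math.ceil(decimal_digits / 4)
    return overhead + 2 * base10000_digits

BIGINT_BYTES = 8  # bigint is always 8 bytes and holds up to 19 digits

print(numeric_storage_estimate(15))  # NUMERIC(15,0): roughly 16 bytes
print(BIGINT_BYTES)                  # bigint: 8 bytes
```

Since a 15-digit integer fits comfortably in bigint's range (about ±9.2 * 10^18), the conversion loses nothing and roughly halves the per-column storage under these assumptions.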
