If you are looking for raw speed, I would keep the whole thing as arrays in
memory in C++, and just do backups to the database on a regular basis.
If you need durability with no data loss, or true SQL compatibility and/or
portability in your application, that approach won't apply.
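
Very roughly, the idea looks like the sketch below. To be clear, this is
only an illustration: the table layout, snapshot path, and backup interval
are made up, and the flush to Postgres is stubbed out as a plain binary
file dump (in practice you would push the data back with libpq or COPY).

// Hold the hot table in memory; snapshot it periodically.
// The real backup step would write back to PostgreSQL; here it
// is stubbed out as a binary file dump.
#include <chrono>
#include <cstdint>
#include <fstream>
#include <mutex>
#include <thread>
#include <vector>

struct Table {
    std::vector<std::vector<int16_t>> rows;  // e.g. 1M rows x 1000 INT2 columns
    std::mutex mtx;                          // guards readers vs. the backup thread

    void snapshot(const char* path) {
        std::lock_guard<std::mutex> lock(mtx);
        std::ofstream out(path, std::ios::binary);
        for (const auto& row : rows)
            out.write(reinterpret_cast<const char*>(row.data()),
                      static_cast<std::streamsize>(row.size() * sizeof(int16_t)));
    }
};

int main() {
    Table table;
    // ... load the table from the database and serve queries from memory ...

    // Periodic backup loop; in a real server this would run until shutdown.
    std::thread backup([&table] {
        for (int i = 0; i < 3; ++i) {        // bounded so the sketch terminates
            std::this_thread::sleep_for(std::chrono::seconds(60));
            table.snapshot("/tmp/table.snapshot");
        }
    });
    backup.join();
}

The single mutex is the crude part: a real implementation would want
something finer-grained so the backup pass doesn't stall readers.
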
Guillaume Houssay wrote:
> I am setting up a project using Apache, PHP and PostgreSQL.
> This application will be used by about 30 users.
>
> The database looks roughly like this:
>
> between 12 GB and 15 GB in total
> 4 tables with 1 million rows and 1000 columns each, 90% INT2 and the
> rest float (20% of all the data will be 0)
> the other tables have fewer than 10,000 rows
>
> Most of the queries will be SELECTs that are not very complicated (I
> think, at this point)
>
> I have one question regarding the hardware configuration:
>
> DELL
> dual 2.8 GHz processors
> 4 GB RAM
> 76 GB of disk using RAID 5
> Linux distribution to be decided (Red Hat?)
>
> Do you think this configuration is enough to get good performance once
> the database is set up properly?
>
> Do you think the big tables should be split in order to have fewer
> columns? This would mean that some of my queries would need JOINs.
>
> Thank you for your help!