Re: [HACKERS] Serious performance problem - Mailing list pgadmin-hackers

From Jean-Michel POURE
Subject Re: [HACKERS] Serious performance problem
Date
Msg-id 4.2.0.58.20011030125000.00a53100@pop.freesurf.fr
List pgadmin-hackers
>For example I can assure in my application that the index, once created
>will be valid, because I just want to read in a new set of data once
>a day (from the MS-SQL Server which collects data over the day).  So
>I could recreate all indices after the import and the database is
>readonly until the next cron job.  Is there any chance to speed up
>those applications?

Hello Andreas,

Is your database read-only? Good point, but sorry to insist: your problem is software optimization. In your case the database may climb to 200 million rows (1,000 days x 200,000 rows). What will you do then? Buy a 16-CPU Itanium box with 10 GB of RAM and an MS SQL Server licence? Take a close look at the problem itself: how long does it take MS SQL Server to query 200 million rows? The real issue is not choosing between MS SQL Server and PostgreSQL ...

If you are adding 200,000 rows of data every day, consider using CREATE TABLE AS to build a result (summary) table, combined with PL/pgSQL triggers to keep it consistent with the base data. You will then get instant results, even on 2 billion rows, because you always query the result table, not the original one. Large databases are routinely optimized this way: even with smart indexes, there are cases (like yours) that need *smart* optimization.
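
To make the idea concrete, here is a minimal sketch (written in current PL/pgSQL syntax, not tested against your schema). The "measurements" base table, the "daily_summary" result table and their columns are my own assumptions, not your actual design; queries then hit the small summary table instead of the full history:

-- Hypothetical base table collecting ~200,000 rows per day.
CREATE TABLE measurements (
    measured_at timestamp NOT NULL,
    sensor_id   integer   NOT NULL,
    value       numeric   NOT NULL
);

-- Build the result (summary) table once with CREATE TABLE AS.
CREATE TABLE daily_summary AS
    SELECT date_trunc('day', measured_at) AS day,
           sensor_id,
           count(*)   AS n_rows,
           sum(value) AS total_value
      FROM measurements
     GROUP BY 1, 2;

-- PL/pgSQL trigger keeps the summary consistent on every insert.
CREATE FUNCTION maintain_daily_summary() RETURNS trigger AS $$
BEGIN
    UPDATE daily_summary
       SET n_rows      = n_rows + 1,
           total_value = total_value + NEW.value
     WHERE day = date_trunc('day', NEW.measured_at)
       AND sensor_id = NEW.sensor_id;
    IF NOT FOUND THEN
        INSERT INTO daily_summary (day, sensor_id, n_rows, total_value)
        VALUES (date_trunc('day', NEW.measured_at), NEW.sensor_id, 1, NEW.value);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_maintain_daily_summary
    AFTER INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE maintain_daily_summary();

-- A report then scans the small summary table only, e.g.:
-- SELECT sensor_id, total_value FROM daily_summary WHERE day = '2001-10-29';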

Do you need PL/pgSQL source code to run such a test on 2 billion rows? If so, please email me on pgsql-general and I will send you the code.

Best regards,
Jean-Michel POURE
