Re: TB-sized databases - Mailing list pgsql-performance

From Pablo Alcaraz
Subject Re: TB-sized databases
Msg-id 474D81CA.9080601@laotraesquina.com.ar
In response to Re: TB-sized databases  (Pablo Alcaraz <pabloa@laotraesquina.com.ar>)
List pgsql-performance
Pablo Alcaraz wrote:
> Simon Riggs wrote:
>> All of those responses have cooked up quite a few topics into one. Large
>> databases might mean text warehouses, XML message stores, relational
>> archives and fact-based business data warehouses.
>>
>> The main thing is that TB-sized databases are performance critical. So
>> it all depends upon your workload really as to how well PostgreSQL, or
>> any other RDBMS vendor, can handle them.
>>
>>
>> Anyway, my reason for replying to this thread is that I'm planning
>> changes for PostgreSQL 8.4+ that will allow us to get bigger and
>> faster databases. If anybody has specific concerns then I'd like to hear
>> them so I can consider those things in the planning stages.
> it would be nice to do something with SELECTs so we can retrieve a
> row set from huge tables using an indexed criterion without falling
> back to a full scan.
>
> In my opinion, by definition, a huge database will sooner or later
> have tables far bigger than the available RAM (same for their
> indexes). I think the queries need to be answered with indexes smart
> enough to be fast on disk.
>
> Pablo

I am dealing with a very large database. I am not sure whether all these
things can be solved with the current Postgres version using some
configuration parameters. I'll be happy to read your suggestions and
ideas about these queries.

In my opinion there are queries that will need to be tuned for "huge
databases" (huge database = a database whose relevant tables and indexes
are, or will be, far bigger than all the available RAM):

-- example table
CREATE TABLE homes (
        id bigserial PRIMARY KEY,
        name text,
        location text,
        money_win bigint,
        zipcode int
);
CREATE INDEX money_win_idx ON homes(money_win);
CREATE INDEX zipcode_idx ON homes(zipcode);


SELECT max(id) FROM homes;
I think the information needed to answer this quickly could be found
using the primary key index. The same goes for min(id).
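
For what it is worth, these are the classic rewrites equivalent to
max(id)/min(id) that a btree index on id can answer directly with a
single backward or forward index scan; they assume the homes table
above, where id is NOT NULL:

SELECT id FROM homes ORDER BY id DESC LIMIT 1;  -- same answer as max(id)
SELECT id FROM homes ORDER BY id ASC LIMIT 1;   -- same answer as min(id)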

SELECT max(id) FROM homes WHERE id > 8000000000;
The same, but useful for finding out the same thing on partitioned
tables (using id as the partitioning criterion). It would be nice if
Postgres did not need the WHERE clause to realize that it only has to
scan the last partition rather than every single one. The same goes
for min(id).
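
To illustrate why the WHERE clause matters today, an inheritance-based
layout typically looks roughly like this (partition names and boundaries
are invented for the example):

-- hypothetical child partitions of homes, split on id ranges
CREATE TABLE homes_p08 (CHECK (id >= 8000000000 AND id < 9000000000)) INHERITS (homes);
CREATE TABLE homes_p09 (CHECK (id >= 9000000000)) INHERITS (homes);

-- With constraint_exclusion = on, the planner compares the CHECK
-- constraints against "id > 8000000000" and skips every partition whose
-- range contradicts it. Without the WHERE clause there is nothing to
-- compare against, so every partition gets scanned.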

SELECT * FROM homes WHERE money_win = 1300000000;
Postgres decides too easily that it must solve this kind of query with a
sequential scan, even when the table (or the index) does not fit in
memory, as soon as the expected number of rows is not close to 1 (for
example, when the query returns 5000 rows). The same happens with
filters like 'WHERE money_win >= xx' or 'WHERE money_win BETWEEN xx AND
yy'. But I do not know whether this behavior comes from a wrong
postgresql configuration on my side or from something I missed.
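
One way to check whether the plan choice is the configuration's fault is
to compare both plans directly in a session (a diagnostic sketch only,
not a suggestion to disable sequential scans globally):

EXPLAIN ANALYZE SELECT * FROM homes WHERE money_win = 1300000000;

SET enable_seqscan = off;  -- per-session: push the planner off the seqscan
EXPLAIN ANALYZE SELECT * FROM homes WHERE money_win = 1300000000;
RESET enable_seqscan;

-- If the forced index plan is much faster, lowering random_page_cost and
-- raising effective_cache_size in postgresql.conf usually lets the
-- planner pick it on its own.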

SELECT count(*) FROM homes;
It would be *cute* if Postgres stored this value and only recalculated
it when it thinks the stored value is wrong (for example, after an
abnormal shutdown).
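
The trigger workaround I mention further down would look roughly like
this for count(*); it works, but every insert and delete pays for it
(the side table, function and trigger names are made up):

CREATE TABLE homes_rowcount (n bigint NOT NULL);
INSERT INTO homes_rowcount SELECT count(*) FROM homes;

CREATE FUNCTION homes_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE homes_rowcount SET n = n + 1;
    ELSE  -- DELETE
        UPDATE homes_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER homes_count AFTER INSERT OR DELETE ON homes
    FOR EACH ROW EXECUTE PROCEDURE homes_count_trig();

-- then:
SELECT n FROM homes_rowcount;  -- instead of count(*) over homes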

SELECT zipcode, count(zipcode) AS n FROM homes GROUP BY zipcode;
It would be *very cute* if Postgres could store these values (or is that
already there?) in the index or wherever, and only recalculate them when
it thinks the stored values are wrong (for example, after an abnormal
shutdown).
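
The same idea applied to the GROUP BY query, again only a sketch: keep a
small summary table and read that instead of scanning homes (the table
name is made up; a trigger like the one above, or a periodic job, would
be needed to keep it current):

CREATE TABLE homes_zipcode_counts AS
    SELECT zipcode, count(*) AS n FROM homes GROUP BY zipcode;

SELECT zipcode, n FROM homes_zipcode_counts;  -- instead of the GROUP BY over homes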

In my opinion, partitioned tables will be the rule in "huge databases",
not the exception. It would be important (for me at least) that these
queries can be solved quickly when they run against partitioned tables.

Maybe one or more of these queries can already be solved with some kind
of optimization, but I have not discovered which ones (I'll be happy to
read suggestions :D). I am sure that many or all of these queries could
be handled with some kind of trigger or sequence that stores the needed
information. But in general the information is already there right now
(isn't it?), and the queries only need the server to look in the right
place. A trigger or function written in one of the supported procedural
languages will probably consume far more CPU resources to work out
information that already exists, and we would have to do it inside
transactions (more performance cost) just to be sure we are fine after
an abnormal server shutdown.

Currently I have several 250GB+ tables with billions of rows (small rows,
like the homes table in the example). I partitioned them and distributed
the partitions and indexes across different tablespaces, etc. I think I
"did not need" as many partitions as I have right now (300+ for some
tables, and growing); I would only need enough partitions to spread the
tables across different tablespaces. I created so many partitions because
the performance with really big tables is not good enough for me when the
programs run these kinds of queries, and the insert/update speed gets
worse and worse over time.
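
For reference, the DDL involved in placing a partition and its indexes in
their own tablespace looks roughly like this (partition boundaries and the
tablespace name are invented for the example):

CREATE TABLE homes_p300 (
        CHECK (id >= 300000000000 AND id < 301000000000)
) INHERITS (homes) TABLESPACE disk07;

CREATE INDEX homes_p300_money_win_idx ON homes_p300 (money_win) TABLESPACE disk07;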

I expect a couple of tables to reach 1TB+ in a few months... buying more
and more RAM is an option but not a solution, because eventually the
database will be far bigger than the available RAM.

Last but not least, it would be *excellent* if this kind of optimization
were possible without weird non-standard SQL statements. I think
PostgreSQL would handle huge databases better if it could solve these
kinds of queries in the fastest way by itself, or at least if we were
able to tell it to choose a different strategy. I could help it through
postgresql.conf settings that activate/deactivate some behavior, or
through some system table where I state the strategy I want for certain
tables (the way autovacuum handles per-table exceptions right now), or
through non-standard DDL that defines that strategy.
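
Some of the knobs that already exist and touch this area, just to make
the point concrete (the values below are placeholders, not
recommendations):

# postgresql.conf
constraint_exclusion = on         # let the planner skip partitions via CHECK constraints
random_page_cost = 2.0            # lower than the default 4.0 when random I/O is relatively cheap
effective_cache_size = 8GB        # how much of the database the OS cache can hold
default_statistics_target = 100   # better row estimates for the filtered/grouped columns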

But the point is that the programmers must be able to keep using standard
SQL for selects/inserts/updates/deletes with 'where' and 'group by'
clauses. In my case the programs are built with Java + JPA, so standard
SQL (though not DDL) is important to keep things as they are. :)

Well, that's my two cents of feedback.

Regards

Pablo

PS: Sorry for my broken English.
