Thread: Suitability of postgres for very high transaction volume

Suitability of postgres for very high transaction volume

From
Alex Avriette
Date:
I'm intending to use postgres as a new backend for a server I am running.
The throughput is roughly 8GB per day over 10,000 concurrent connections. At
the moment, the software in question is using complex hashes and b-trees. My
feeling was that the people who wrote postgres were more familiar with
complex data storage, and it would be faster to offload to postgres the task
of indexing files and whatnot. So its function would be as a
pseudo-filesystem with searching capabilities and also as a
userdb/authenticationdb. I'm using perl's POE, so there could conceivably be
several dozen to even a hundred or more concurrent queries. The amount of
data exchange in these queries would be very small. But over the course of a
day, it will add up to quite a bit. The server in question has a gig of ram
and sits on a T1.

At the moment, I use postgres for storing phenomenal amounts of data
(terabyte scale), but the transaction load is very small in comparison. (the
server I am migrating gets something like 6M - 9M hits/day)

Has anyone attempted to use postgres in this fashion? Are there steps I
should take here?

Thanks,
alex

--
alex j. avriette
perl hacker.
a_avriette@acs.org
$dbh -> do('unhose');

Limit of sequence

From
"Mourad EL HADJ MIMOUNE"
Date:
Hello all,
I use a sequence to generate an ID for each row, using the same sequence
for all tables (much like an OID). I want to know the limit of the numbers
generated by a sequence: does a sequence use a short or a long integer?
I use my own ID instead of the system OID because I can't use the OID as a
foreign key.
I'm currently on PostgreSQL 7.1.3. Is version 7.2 ready yet, and does it
support references to an OID?
Thanks.
Mourad.



Re: Limit of sequence

From
Holger Krug
Date:
On Mon, Dec 10, 2001 at 04:30:45PM +0100, Mourad EL HADJ MIMOUNE wrote:
>
> I use a sequence to generate an ID for each row, using the same sequence
> for all tables (much like an OID). I want to know the limit of the numbers
> generated by a sequence: does a sequence use a short or a long integer?

From the `HISTORY' file of PostgreSQL CVS:

   Sequences now use int8 internally (Tom)

From file `backend/commands/sequence.c':

Datum
nextval(PG_FUNCTION_ARGS)
{
..
        PG_RETURN_INT64(result);
}
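
Concretely, int8 is a signed 64-bit integer, so a sequence tops out at
2^63 - 1 = 9223372036854775807. A quick back-of-the-envelope sketch in
Python (the one-million-calls-per-second rate is a made-up illustration,
not a measured figure):

```python
# Sequences are int8 internally: a signed 64-bit counter.
INT8_MAX = 2**63 - 1          # 9223372036854775807

# Hypothetical allocation rate, purely for scale.
calls_per_second = 1_000_000
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = INT8_MAX / calls_per_second / seconds_per_year
print(years_to_exhaust)       # roughly 292,000 years
```

So even a single sequence shared across every table will not run out in
practice.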




--
Holger Krug
hkrug@rationalizer.com

Re: Limit of sequence

From
Stephan Szabo
Date:
On Mon, 10 Dec 2001, Mourad EL HADJ MIMOUNE wrote:

>
> Hello all,
> I use a sequence to generate an ID for each row, using the same sequence
> for all tables (much like an OID). I want to know the limit of the numbers
> generated by a sequence: does a sequence use a short or a long integer?
> I use my own ID instead of the system OID because I can't use the OID as a
> foreign key.
> I'm currently on PostgreSQL 7.1.3. Is version 7.2 ready yet, and does it
> support references to an OID?

7.2 is still in beta, and I believe it does support oid references,
although you're still almost certainly better off using your own id
(especially since you can get int8 sequences and you don't have to share
them with other tables).



Re: Suitability of postgres for very high transaction volume

From
Tom Lane
Date:
Alex Avriette <a_avriette@acs.org> writes:
> I'm intending to use postgres as a new backend for a server I am running.
> The throughput is roughly 8GB per day over 10,000 concurrent
> connections.

You will need to find a way of pooling those connections; I doubt you
really want to have 10000 backend processes running at once, do you?

> ... I'm using perl's POE, so there could conceivably be
> several dozen to even a hundred or more concurrent queries.

A hundred or so concurrent operations seems perfectly reasonable, given
that you're using some serious iron.  But I think you want a hundred
active backends, not a hundred active ones and 9900 idle ones.
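
The pooling idea can be sketched as a small checkout queue. This is a
minimal Python illustration, not a production pooler (the `ConnectionPool`
class and the `connect` factory are hypothetical names; in practice you
would use a dedicated pooling layer in front of the database):

```python
import queue

class ConnectionPool:
    """Open a fixed number of connections up front and reuse them.

    Callers block in acquire() when all connections are checked out,
    which caps the number of active backends at `size` instead of
    letting thousands of idle connections pile up.
    """

    def __init__(self, connect, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())  # eagerly open `size` connections

    def acquire(self, timeout=None):
        # Blocks until a connection is free (or raises queue.Empty on timeout).
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for the next caller instead of closing it.
        self._pool.put(conn)
```

With a pool of, say, 100, the 10,000 client sessions share 100 backend
processes: each request borrows a connection, runs its small query, and
returns it.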

            regards, tom lane