I'm intending to use postgres as a new backend for a server I am running.
The throughput is roughly 8 GB per day across 10,000 concurrent connections. At
the moment, the software in question is using complex hashes and b-trees. My
feeling was that the people who wrote postgres were more familiar with
complex data storage, and it would be faster to offload to postgres the task
of indexing files and whatnot. So it would function as a pseudo-filesystem
with search capabilities and also as a userdb/authentication db. I'm using
Perl's POE, so there could conceivably be
several dozen to even a hundred or more concurrent queries. The amount of
data exchange in these queries would be very small. But over the course of a
day, it will add up to quite a bit. The server in question has a gig of ram
and sits on a T1.
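
To make the query pattern concrete, here's a rough sketch of the kind of auth
lookup I have in mind. The table and column names are placeholders, and plain
DBI like this blocks the POE event loop, so in practice I'd expect to hand the
queries off to child processes (something like POE::Component::DBIAgent) rather
than call it inline:

    use DBI;

    # Hypothetical schema -- table and column names are made up.
    my $dbh = DBI->connect('dbi:Pg:dbname=appdb', 'appuser', $password,
                           { RaiseError => 1, AutoCommit => 1 });

    # Prepare once, execute per request: keeps the per-query overhead
    # small, which matters when many sessions share the handle and each
    # query moves only a few bytes.
    my $sth = $dbh->prepare(q{
        SELECT user_id, password_hash
          FROM users
         WHERE username = ?
    });

    sub authenticate {
        my ($username) = @_;
        $sth->execute($username);
        my ($uid, $hash) = $sth->fetchrow_array;
        $sth->finish;
        return ($uid, $hash);
    }

Each call is a single indexed lookup returning one small row, repeated millions
of times a day -- that's the shape of the load I'm asking about.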
At the moment, I use postgres for storing phenomenal amounts of data
(terabyte scale), but the transaction load is very small in comparison. (The
server I am migrating gets something like 6M-9M hits/day.)
Has anyone attempted to use postgres in this fashion? Are there steps I
should take here?
Thanks,
alex
--
alex j. avriette
perl hacker.
a_avriette@acs.org
$dbh -> do('unhose');