Tom Lane wrote:
>Bob Ippolito <bob@redivi.com> writes:
>
>>If you don't want to optimize the whole application, I'd at least
>>just push the DB operations down to a very small number of
>>connections (*one* might even be optimal!), waiting on some kind of
>>thread-safe queue for updates from the rest of the system.
>>
>
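A rough sketch of what Bob describes, in Python purely for illustration
(psycopg2, the DSN, and the table are invented examples, not anything
from the original posts):

    import queue
    import threading

    import psycopg2  # assumed driver; any DB-API 2.0 module works the same way

    updates = queue.Queue()   # thread-safe; pollers just .put() work items

    def db_writer():
        # One dedicated connection services every update in arrival order.
        conn = psycopg2.connect("dbname=test")   # hypothetical DSN
        cur = conn.cursor()
        while True:
            sql, params = updates.get()
            cur.execute(sql, params)
            conn.commit()
            updates.task_done()

    threading.Thread(target=db_writer, daemon=True).start()

    # Any polling thread can hand off work without holding a connection:
    updates.put(("UPDATE hosts SET last_seen = now() WHERE id = %s", (42,)))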
>While I agree that hundreds of threads seems like overkill, I think the
>above advice might be going too far in the other direction. The problem
>with single-threaded operation is that any delay affects the whole
>system --- e.g., if you're blocked waiting for disk I/O, the CPU doesn't
>get anything done either. You want enough DB connections working in
>parallel to make sure there's always something useful for each major
>component to do. This is particularly important for Postgres, which
>doesn't do any internal query parallelization (not that it would
>help much anyway for the sorts of trivial queries you are worried about).
>If you have, say, a 4-way CPU, you want at least 4 active connections to
>make good use of the CPUs.
>
>I'd suggest trying to build the system so that it uses a dozen or two
>active database connections. If that doesn't match up to the number of
>polling activities you want to have in flight at any instant, then you
>can do something like what Bob suggested on the client side to bridge
>the gap.
>
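Continuing the sketch above, bridging that gap could be as simple as
starting several of the same workers on the shared queue; the pool size
of 12 is just "a dozen" made concrete:

    import threading  # db_writer and updates as defined in the earlier sketch

    NUM_WORKERS = 12   # "a dozen or two" active connections

    def start_pool(n=NUM_WORKERS):
        # Each worker opens its own connection inside db_writer, so n workers
        # means n connections all draining the same queue; a stalled disk
        # write on one connection no longer blocks the others.
        for _ in range(n):
            threading.Thread(target=db_writer, daemon=True).start()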
>As far as the question "can PG do 1-2k xact/sec", the answer is "yes
>if you throw enough hardware at it". Spending enough money on the
>disk subsystem is the key ...
>
The 1-2k xact/sec figure for MySQL seems suspicious; it sounds very much
like write-back caching rather than write-through, especially since heavy
concurrent write access isn't generally considered MySQL's strength.

I wonder whether preserving the database after a fatal crash is really
necessary here, since the data stored sounds quite volatile; if not,
fsync=false might be sufficient.
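For reference, that is a one-line, server-wide change in postgresql.conf
("false" and "off" are interchangeable boolean spellings):

    # postgresql.conf -- skip the fsync at commit; gives up crash safety,
    # but avoids waiting on the disk for every transaction
    fsync = off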
Regards,
Andreas