Thread: Sending several commands simultaneously to PostgreSQL 8.4
Hello,

I run a card game written in Perl on a CentOS 6.3 / 64 bit + PostgreSQL 8.4.13, where quite a lot of player statistics are written to the d/b (score, game results, moves, etc.) Here is a player profile: http://preferans.de/DE11198

The machine has a quad-core Intel + 32 GB RAM. In postgresql.conf I set:

max_connections = 100
shared_buffers = 4096MB
work_mem = 32MB
log_min_duration_statement = 10000

I also use pgbouncer (for PHP scripts), but my Perl game daemon talks directly to /tmp/.s.PGSQL.5432

My game daemon runs in a non-forking loop and poll()s TCP sockets to the player machines. Players complain about my server freezing for a few seconds sometimes, and I can see it myself in the game logs - when data is written to the d/b (and postmaster processes take 90% CPU).

So my question is: do I have to program a separate daemon - which would be polled via a Unix domain socket by my main game daemon and which would handle sending SQL commands (typically inserts and selects)? Or does such a generic daemon exist already? Or can multiple commands be sent to PostgreSQL simultaneously?

Here is how my Perl daemon usually talks to the d/b (this happens inside the poll() loop and thus stops everything for milliseconds):

sub logout($)
{
        my $user = shift;
        my $id   = $user->{ID};

        eval {
                my $dbh = DBI->connect_cached(DSN, DBUSER, DBPASS, {
                        AutoCommit       => 1,
                        PrintWarn        => 1,
                        PrintError       => 1,
                        RaiseError       => 1,
                        FetchHashKeyName => 'NAME_lc',
                        pg_enable_utf8   => 1
                });

                if ($user->{SCORE}) {
                        my $sth_money = $dbh->prepare_cached(
                            q{select pref_update_money(?, ?)});
                        $sth_money->execute($id, $user->{SCORE});
                        $user->{SCORE} = 0;
                }

                my $sth_logout = $dbh->prepare_cached(
                    q{update pref_users set logout=now() where id=?});
                $sth_logout->execute($id);
        };
        warn $@ if $@;
}

Thanks for any ideas
Alex
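To see how long these calls actually block the loop, one option (just a sketch, not something from the thread - the helper name and the 100 ms threshold are made up) is to time each database call with Time::HiRes:

use Time::HiRes ();

# Hypothetical helper: run a block of database work and warn when it
# blocked the poll() loop for longer than the (arbitrary) threshold.
sub timed_db_call {
        my ($label, $code) = @_;
        my $t0 = Time::HiRes::time();
        eval { $code->() };
        warn $@ if $@;
        my $ms = (Time::HiRes::time() - $t0) * 1000;
        warn sprintf("%s blocked the loop for %.1f ms\n", $label, $ms)
            if $ms > 100;
}

# e.g. inside logout():
# timed_db_call('logout sql', sub { $sth_logout->execute($id) });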
To make my question more concrete: if I'd like to round-robin 6 PostgreSQL connections from my Perl script - how should I change my code:

eval {
        my $dbh = DBI->connect_cached(DSN, DBUSER, DBPASS, {
                AutoCommit       => 1,
                PrintWarn        => 1,
                PrintError       => 1,
                RaiseError       => 1,
                FetchHashKeyName => 'NAME_lc',
                pg_enable_utf8   => 1
        });

        ..........SQL commands.......
};
warn $@ if $@;

I.e. I don't understand how to combine this with the DBI connect_cached() call - how do I make it return different connections?

Thanks
Alex
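A minimal sketch of one way to do such a round-robin, using plain DBI->connect instead of connect_cached (the pool array, the counter and next_dbh() are just for illustration; DSN, DBUSER and DBPASS are the same constants as in the earlier code):

use DBI;

my $NUM_CONNS = 6;      # pool size, just for illustration
my @pool;               # the six handles
my $next = 0;           # round-robin index

# Open the pool once at startup, outside the poll() loop:
for (1 .. $NUM_CONNS) {
        push @pool, DBI->connect(DSN, DBUSER, DBPASS, {
                AutoCommit       => 1,
                PrintWarn        => 1,
                PrintError       => 1,
                RaiseError       => 1,
                FetchHashKeyName => 'NAME_lc',
                pg_enable_utf8   => 1,
        });
}

# Hand out the handles in rotation:
sub next_dbh {
        my $dbh = $pool[$next];
        $next = ($next + 1) % $NUM_CONNS;
        return $dbh;
}

Note that each execute() still blocks until its result arrives, so by itself this does not stop the loop from stalling - which is Jeff's point further down in the thread.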
I would like to add a private "key" to make my dbh's different throughout my script:

eval {
        my $dbh = DBI->connect_cached(DSN, DBUSER, DBPASS, {
                AutoCommit       => 1,
                MY_PRIVATE_KEY   => __FILE__.__LINE__,   ### <-- HERE
                PrintWarn        => 1,
                PrintError       => 1,
                RaiseError       => 1,
                FetchHashKeyName => 'NAME_lc',
                pg_enable_utf8   => 1
        });

        ..........SQL commands.......
};
warn $@ if $@;

as suggested by http://search.cpan.org/~timb/DBI-1.616/DBI.pm#connect_cached

But how could I check whether this has worked on my server and spawned more connections through /tmp/.s.PGSQL.5432 (versus the pgbouncer connections for the PHP scripts through /tmp/.s.PGSQL.6432)?

I've tried:

# select * from pg_stat_activity where usename='XXXXX';

but client_port is null there (because I use Unix sockets?)

Thanks
Alex
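Whatever client_port shows for Unix-socket connections, every backend still gets its own row (and procpid) in pg_stat_activity, so counting the rows for that user answers the question. Something along these lines on 8.4 (the 'XXXXX' usename is the placeholder from the post):

select procpid, backend_start, current_query
  from pg_stat_activity
 where usename = 'XXXXX';

-- or simply count the backends:
select count(*) from pg_stat_activity where usename = 'XXXXX';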
I've ended up calling this procedure in the loop before poll() - to run queued commands from an array of hashes:

sub execSql
{
        eval {
                my $dbh = DBI->connect_cached(DSN, DBUSER, DBPASS, {
                        private_key      => __FILE__,
                        AutoCommit       => 1,
                        PrintWarn        => 1,
                        PrintError       => 1,
                        RaiseError       => 1,
                        FetchHashKeyName => 'NAME_lc',
                        pg_enable_utf8   => 1
                });

                # collect the result of the previous async query, if it is done
                if ($WaitSql && $dbh->pg_ready) {
                        $WaitSql = 0;
                        $dbh->pg_result();
                }

                # kick off the next queued command asynchronously
                if (!$WaitSql && @Sqls) {
                        $WaitSql = 1;
                        my $sql  = shift @Sqls;
                        my $cmd  = $sql->{CMD};
                        my $args = $sql->{ARGS};
                        my $sth  = $dbh->prepare_cached($cmd,
                            { pg_async => PG_ASYNC + PG_OLDQUERY_WAIT });
                        $sth->execute(@$args);
                }
        };
        warn $@ if $@;
}
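For reference, a sketch of how the earlier logout() could feed that queue instead of talking to the d/b directly - the PG_ASYNC and PG_OLDQUERY_WAIT constants come from DBD::Pg's :async export tag, and the @Sqls / $WaitSql names match the post:

use DBD::Pg qw(:async);    # provides PG_ASYNC, PG_OLDQUERY_WAIT, ...

our @Sqls;                 # queue of { CMD => ..., ARGS => [...] } hashes
our $WaitSql = 0;          # true while an async query is still running

sub logout($) {
        my $user = shift;
        my $id   = $user->{ID};

        if ($user->{SCORE}) {
                push @Sqls, {
                        CMD  => q{select pref_update_money(?, ?)},
                        ARGS => [ $id, $user->{SCORE} ],
                };
                $user->{SCORE} = 0;
        }

        push @Sqls, {
                CMD  => q{update pref_users set logout=now() where id=?},
                ARGS => [ $id ],
        };
}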
On Monday, January 21, 2013, Alexander Farber wrote:
Hello,
I run a card game written in Perl on
a CentOS 6.3 / 64 bit + PostgreSQL 8.4.13
where quite a lot of player statistics are written
to the d/b (score, game results, moves, etc.)
Here is a player profile: http://preferans.de/DE11198
The machine has a quad-core Intel + 32 GB RAM.
In postgresql.conf I set:
max_connections = 100
shared_buffers = 4096MB
work_mem = 32MB
log_min_duration_statement = 10000
I also use pgbouncer (for PHP scripts),
but my Perl game daemon talks
directly to /tmp/.s.PGSQL.5432
Why not have Perl go through pgbouncer as well?
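For what it's worth, on the Perl side that would just be a different DSN pointing at pgbouncer's socket - a sketch, assuming pgbouncer listens on the /tmp/.s.PGSQL.6432 socket mentioned later in the thread, with a placeholder database name:

# Connect through /tmp/.s.PGSQL.6432 (pgbouncer) instead of the server's
# .s.PGSQL.5432 - a host value starting with "/" is treated by libpq as
# a Unix-socket directory, and the port picks the socket file name.
my $dbh = DBI->connect_cached('dbi:Pg:dbname=mygame;host=/tmp;port=6432',
        DBUSER, DBPASS, { AutoCommit => 1, RaiseError => 1 });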
My game daemon runs in a non-forking loop
and poll()s TCP sockets to the player machines.
Players complain about my server
freezing for a few seconds sometimes
and I can see it myself in the game logs -
when data is written to the d/b
(and postmaster processes take 90% CPU).
Any idea what causes that? Your code only seems to do anything with the database at the time that someone logs out. Does everyone log out at the same time?
So my question is:
do I have to program a separate daemon -
which would be polled via a Unix domain
socket by my main game daemon and
which would handle sending SQL commands
(typically inserts and selects)?
One alternative possibility would be to use synchronous_commit=off. This opens up the possibility that transactions would be lost in the case of a crash. But if you change your code to send off updates and not wait for a response, as you seem to be proposing, then you are also introducing that possibility, just implicitly.
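If trying that, it can be scoped to just the game daemon's session rather than the whole cluster - a one-line sketch, issued right after the connect:

# Commits now return before the WAL flush hits disk; a crash can lose
# the last few transactions, but the daemon no longer waits on the flush.
$dbh->do(q{SET synchronous_commit = off});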
Cheers,
Jeff
On Monday, January 21, 2013, Alexander Farber wrote:
To make my question more concrete:
if I'd like to round-robin 6 PostgreSQL connections
from my Perl script - how should I change my code:
But what would that accomplish? If your server is constipated on the IO channel, all 6 connections will suffer the same.
Cheers,
Jeff