Thread: async queries in Perl and poll()/select() loop - how to make them work together?

async queries in Perl and poll()/select() loop - how to make them work together?

From: Alexander Farber

Hello,

I'm using CentOS 5.5 Linux with the stock perl v5.8.8,
have installed DBD-Pg-2.17.1 via the CPAN shell,
and I'm running postgresql-server-8.4.5-1PGDG.rhel5.

I've prepared a simple test case demonstrating
my problem - it is listed at the bottom. It works okay
when I remove {pg_async => PG_ASYNC}.

I have a small multiplayer game, a non-forking daemon
reading/writing to sockets and running in an IO::Poll loop.

I'd like to add player statistics to it, so I was hoping
that I could call the simple INSERT/UPDATE
statements asynchronously from the loop. I don't
need to fetch any results of those queries,
because displaying statistics will be done by
web scripts, not by my game daemon.

I.e. I would like to "fire and forget" queries.

But unfortunately I get the error:
DBD::Pg::st execute failed: Cannot execute
until previous async query has finished
even though I'm not using PG_OLDQUERY_WAIT

Does anybody know what is wrong, and
how would you use async queries with
poll()/select() loops anyway?

I can't even call "$dbh->pg_result if $sth->pg_ready"
on every loop iteration, because I can have
several queries running at that moment, and
I don't want to iterate through a list of my $sth's...
That defeats my goal of quick poll()-looping.

Regards
Alex

P.S. Here is my test case; it is also listed at

http://stackoverflow.com/questions/4065758/async-queries-with-dbdpg-fail-with-cannot-execute-until-previous-async-query-h

#!/usr/bin/perl -w

use strict;
use DBI;
use DBD::Pg qw(:async);

use constant DBNAME => 'snake';
use constant DBUSER => 'snake';
use constant DBPASS => 'snake';

use constant SQL_CREATE_TABLES => q{
        /*
        create table pref_users (
                id varchar(32) primary key,
                first_name varchar(32),
                last_name varchar(32),
                female boolean,
                avatar varchar(128),
                city varchar(32),
                lat real check (-90 <= lat and lat <= 90),
                lng real check (-90 <= lng and lng <= 90),
                last_login timestamp default current_timestamp,
                last_ip inet,
                medals smallint check (medals > 0)
        );

        create table pref_rate (
                obj varchar(32) references pref_users(id),
                subj varchar(32) references pref_users(id),
                good boolean,
                fair boolean,
                nice boolean,
                about varchar(256),
                last_rated timestamp default current_timestamp
        );

        create table pref_money (
                id varchar(32) references pref_users,
                yw char(7) default to_char(current_timestamp, 'YYYY-WW'),
                money real
        );
        create index pref_money_yw_index on pref_money(yw);

        create table pref_pass (
                id varchar(32) references pref_users
        );

        create table pref_misere (
                id varchar(32) references pref_users
        );
        */

        create or replace function pref_update_users(_id varchar,
            _first_name varchar, _last_name varchar, _female boolean,
            _avatar varchar, _city varchar, _last_ip inet)
        returns void as $BODY$
                begin

                update pref_users set
                    first_name = _first_name,
                    last_name = _last_name,
                    female = _female,
                    avatar = _avatar,
                    city = _city,
                    last_ip = _last_ip
                where id = _id;

                if not found then
                        insert into pref_users(id, first_name,
                            last_name, female, avatar, city, last_ip)
                        values (_id, _first_name, _last_name,
                            _female, _avatar, _city, _last_ip);
                end if;
                end;
        $BODY$ language plpgsql;
};

eval {
        my $dbh = DBI->connect_cached('dbi:Pg:dbname=' .
            DBNAME, DBUSER, DBPASS, {
            AutoCommit => 1,
            PrintWarn => 1,
            PrintError => 1,
            RaiseError => 1,
            FetchHashKeyName => 'NAME_lc',
            pg_enable_utf8 => 1
        }, {pg_async => PG_ASYNC});

        $dbh->do(SQL_CREATE_TABLES, {pg_async => PG_ASYNC});
};
warn $@ if $@;

for my $i (1..10) {
        eval {
                my $dbh = DBI->connect_cached('dbi:Pg:dbname=' .
                    DBNAME, DBUSER, DBPASS, {
                    AutoCommit => 1,
                    PrintWarn => 1,
                    PrintError => 1,
                    RaiseError => 1,
                    FetchHashKeyName => 'NAME_lc',
                    pg_enable_utf8 => 1
                }, {pg_async => PG_ASYNC});

                #$dbh->pg_result;

                my $sth = $dbh->prepare_cached(
                    q{select pref_update_users(?, ?, ?, ?, ?, ?, NULL)},
                    {pg_async => PG_ASYNC});

                $sth->execute('ID123', 'Alexander', 'Farber',
                    undef, undef, undef);
        };
        warn $@ if $@;
}

Re: async queries in Perl and poll()/select() loop - how to make them work together?

From: Andy Colson

On 11/1/2010 4:29 AM, Alexander Farber wrote:
> I have a small multiplayer game, a non-forking daemon
> reading/writing to sockets and running in an IO::Poll loop.
>
> I.e. I would like to "fire and forget" queries.
>
> But unfortunately I get the error:
> DBD::Pg::st execute failed: Cannot execute
> until previous async query has finished
> even though I'm not using PG_OLDQUERY_WAIT

I believe one database connection can have one async query going at a time.

I don't see any place in the docs where connect (or connect_cached)
supports PG_ASYNC.

Each iteration of your loop is blowing away the previous values, which
should cause problems.  I assume this is just test code?  Is your real
code really going to connect 10 times per person?  You won't be able
to support very many concurrent users that way.  The code above might
work if you switched to arrays (@dbh and @sth).
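
Something like this is roughly what I mean by the array version, as a
sketch only, assuming one dedicated connection per in-flight query (and
note it has to be plain connect: connect_cached with identical arguments
hands back the same cached handle, so it can't give you 10 connections):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use DBD::Pg qw(:async);

# One dedicated connection per in-flight async query.
my (@dbhs, @sths);
for my $i (1 .. 10) {
    my $dbh = DBI->connect('dbi:Pg:dbname=snake', 'snake', 'snake',
        { AutoCommit => 1, RaiseError => 1 });
    my $sth = $dbh->prepare(
        q{select pref_update_users(?, ?, ?, ?, ?, ?, NULL)},
        { pg_async => PG_ASYNC });
    $sth->execute('ID123', 'Alexander', 'Farber', undef, undef, undef);
    push @dbhs, $dbh;
    push @sths, $sth;
}

# Each query runs on its own connection, so they overlap on the server,
# but every one of them still has to be finished off eventually:
$_->pg_result for @dbhs;

That makes the queries overlap on the server, but it costs you a whole
connection per overlapping query.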

Async queries give you the ability to fire one query, let the db work
on it while you do something else, and then come back to it.  You need
to think about your layout (because I'm betting your example code does
not reflect what you really want to do).

Even with async queries, you eventually have to call $dbh->pg_result, so
it's not going to be fire and forget.  To really do fire and forget, and
totally take the stats processing away from game play processing, I'd
suggest an event queue (or rpc), like zeromq, PGQ or gearman.

-Andy

Re: async queries in Perl and poll()/select() loop - how to make them work together?

From: Alexander Farber

Hello Andy and others,

On Mon, Nov 1, 2010 at 3:33 PM, Andy Colson <andy@squeakycode.net> wrote:
> On 11/1/2010 4:29 AM, Alexander Farber wrote:
>> I have a small multiplayer game, a non-forking daemon
>> reading/writing to sockets and running in an IO::Poll loop.
>>
>> I.e. I would like to "fire and forget" queries.
>>
>> But unfortunately I get the error:
>> DBD::Pg::st execute failed: Cannot execute
>> until previous async query has finished
>> even though I'm not using PG_OLDQUERY_WAIT

> I believe one database connection can have one async query going at a time.

Why are there 3 constants
(http://search.cpan.org/dist/DBD-Pg/Pg.pm#Asynchronous_Constants)
then? They suggest you can fire a query and forget it.

> I don't see any place in the docs where connect (or connect_cached)
> supports PG_ASYNC.

True, I've removed it (the problem persists).

> Each iteration of your loop is blowing away the previous values, which
> should cause problems.  I assume this is just test code?  Is your real code
> really going to connect 10 times per person?  You won't be able to support
> very many concurrent users that way.  The code above might work if you
> switched to arrays (@dbh and @sth).

No, I just need one connection, because I have
one process (without any forked processes or threads)
which runs in a poll() loop.

> Async queries give you the ability to fire one query, let the db work on it
> while you do something else, and then come back to it.  You need to think
> about your layout (because I'm betting your example code does not reflect
> what you really want to do).
>
> Even with async queries, you eventually have to call $dbh->pg_result, so
> it's not going to be fire and forget.  To really do fire and forget, and
> totally take the stats processing away from game play processing, I'd
> suggest an event queue (or rpc), like zeromq, PGQ or gearman.

Thanks, I'll look at that, or maybe I'll fork one more process
and open a pipe to it (then I can poll() it too).
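
Roughly what I have in mind, just a sketch (the tab-separated record
format and the error handling are made up):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use IO::Handle;

# Forked "stats writer": the game daemon only prints a line to the pipe,
# the child does the (blocking) database work.
pipe(my $reader, my $writer) or die "pipe: $!";

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {                        # child: blocking DBI is fine here
    close $writer;
    my $dbh = DBI->connect('dbi:Pg:dbname=snake', 'snake', 'snake',
        { AutoCommit => 1, RaiseError => 1 });
    my $sth = $dbh->prepare(
        q{select pref_update_users(?, ?, ?, ?, ?, ?, NULL)});
    while (my $line = <$reader>) {      # one tab-separated record per line
        chomp $line;
        # empty fields become NULLs
        my @fields = map { length $_ ? $_ : undef } split /\t/, $line, 6;
        $sth->execute(@fields);
    }
    exit 0;
}

# parent: the game daemon writes a record and goes straight back to poll()
close $reader;
$writer->autoflush(1);
print {$writer} join("\t", 'ID123', 'Alexander', 'Farber', '', '', ''), "\n";

A real version would still need to reap the child and cope with a full
pipe, but it would keep all the database waiting out of my poll() loop.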

Regards
Alex

Re: async queries in Perl and poll()/select() loop - how to make them work together?

From: Andy Colson

On 11/1/2010 11:58 AM, Alexander Farber wrote:
> No, I just need one connection, because I have one process
> (without any forked processes or threads) which runs in a poll() loop.

Consider the Pg architecture:  On the server a postmaster runs,
listening for connections.  On the client, you connect to the server.
The postmaster will spin up a child process to handle the new
connection.  One postmaster child processes one client connection, and
it can only do one query at a time.

So:
Postmaster
  |
  |--> child 1
  |--> child 2

Each child runs one query at a time.  Your client program has two options:
1) fire off a query and wait for the response and collect it.
2) fire off a query, do something else for a bit, collect the response.


> Why are there 3 constants
> (http://search.cpan.org/dist/DBD-Pg/Pg.pm#Asynchronous_Constants)
> then? They suggest you can fire a query and forget it.

I'm not sure what you mean by fire and forget.  To me, I'd say no, because
you have to collect the results at some point via $dbh->pg_result.
(Even if you fire an update or insert, I think you still have to "finish
off the process" via $dbh->pg_result.)

I don't think you can start a second query until you have called
$dbh->pg_result.  These constants just give you neat ways of waiting...
it's still just one at a time.
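
As far as I can tell from the docs, the constants are flags you add
together on the same pg_async attribute; an untested sketch of how I
read them:

use strict;
use warnings;
use DBI;
use DBD::Pg qw(:async);

my $dbh = DBI->connect('dbi:Pg:dbname=snake', 'snake', 'snake',
    { AutoCommit => 1, RaiseError => 1 });

# Plain PG_ASYNC: execute returns right away, the query keeps running
# on the server.  A second async execute now would die with
# "Cannot execute until previous async query has finished".
my $sth = $dbh->prepare('select pg_sleep(5)', { pg_async => PG_ASYNC });
$sth->execute;

# PG_ASYNC + PG_OLDQUERY_WAIT: wait for the old async query to finish,
# then start this one (also asynchronously).
$dbh->do('select 1', { pg_async => PG_ASYNC + PG_OLDQUERY_WAIT });

# PG_ASYNC + PG_OLDQUERY_CANCEL: cancel the old async query instead of
# waiting for it, then start this one.
$dbh->do('select 2', { pg_async => PG_ASYNC + PG_OLDQUERY_CANCEL });

# Still one query at a time per connection, and the result still has to
# be collected at some point:
$dbh->pg_result;    # blocks here if 'select 2' is not done yet

So WAIT and CANCEL only control what happens to the previous async query
when you start a new one; none of the three lets two queries overlap on
one connection.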

Our definitions of fire and forget might be different, and that's ok, but
in your example code it looked to me like you wanted to run 10
simultaneous queries asynchronously, and that cannot be done without 10
separate database connections.  One connection can only run one query at
a time.

You still have the option, however, of using async queries in your game,
for example:

code to calc stats...
start query to update db stats
code to process game play, etc
finish off the db stats query
final bit of game code and respond to player... etc
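
In DBD::Pg terms that might look roughly like this (calc_stats and the
other subs are just placeholders for your game code):

use strict;
use warnings;
use DBI;
use DBD::Pg qw(:async);

my $dbh = DBI->connect('dbi:Pg:dbname=snake', 'snake', 'snake',
    { AutoCommit => 1, RaiseError => 1 });

my $update = $dbh->prepare(
    q{select pref_update_users(?, ?, ?, ?, ?, ?, NULL)},
    { pg_async => PG_ASYNC });

my @stats = calc_stats();           # code to calc stats...
$update->execute(@stats);           # start query to update db stats
process_game_play();                # code to process game play, etc
$dbh->pg_result;                    # finish off the db stats query
respond_to_player();                # final bit of game code...

# placeholder subs so the sketch compiles
sub calc_stats        { ('ID123', 'Alexander', 'Farber', undef, undef, undef) }
sub process_game_play { }
sub respond_to_player { }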

-Andy



Re: async queries in Perl and poll()/select() loop - how to make them work together?

From: Merlin Moncure

On Mon, Nov 1, 2010 at 1:31 PM, Andy Colson <andy@squeakycode.net> wrote:
> I don't think you can start a second query until you have called
> $dbh->pg_result.  These constants just give you neat ways of waiting...
> it's still just one at a time.

Correct.  The C API also supports the ability to test whether getting the
result would 'block', meaning wait for the server-generated result
because the client doesn't have it yet.

Asynchronous queries give you a neat way to wait on the server or do a
bit of work while a previous query is executing, without dealing with
the headache of threads.  You can't overlap queries on a single
connection because the server doesn't support it.  You could, however,
create a very small number of connections (like 2), keep them
open, and round-robin them.  Forking is overkill.
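
An untested sketch of what that could look like hooked into an IO::Poll
loop, using the pg_socket attribute to get a file descriptor to watch
(the pool/slot names are invented):

use strict;
use warnings;
use DBI;
use DBD::Pg qw(:async);
use IO::Poll qw(POLLIN);

# Two persistent connections; each can have at most one async query in
# flight, so new work goes to whichever one is currently idle.
my @pool;
for (1 .. 2) {
    push @pool, {
        dbh  => DBI->connect('dbi:Pg:dbname=snake', 'snake', 'snake',
                    { AutoCommit => 1, RaiseError => 1 }),
        busy => 0,
    };
}

my $poll = IO::Poll->new;
for my $slot (@pool) {
    # pg_socket is the file descriptor of the libpq connection; alias it
    # (no dup) so IO::Poll can watch it next to the game sockets.
    open $slot->{fh}, '<&=', $slot->{dbh}{pg_socket} or die "fdopen: $!";
    $poll->mask($slot->{fh} => POLLIN);
}

# Called from the game loop; never waits on the database.
sub fire_update {
    my @args = @_;
    for my $slot (@pool) {
        next if $slot->{busy};
        my $sth = $slot->{dbh}->prepare_cached(
            q{select pref_update_users(?, ?, ?, ?, ?, ?, NULL)},
            { pg_async => PG_ASYNC });
        $sth->execute(@args);
        $slot->{busy} = 1;
        return 1;
    }
    return 0;    # both connections busy: queue the record or wait
}

# Called inside the existing poll() loop, after $poll->poll($timeout):
sub reap_updates {
    for my $slot (@pool) {
        next unless $slot->{busy};
        if ($poll->events($slot->{fh}) && $slot->{dbh}->pg_ready) {
            $slot->{dbh}->pg_result;    # collect (and discard) the result
            $slot->{busy} = 0;
        }
    }
}

That keeps the per-iteration bookkeeping down to the couple of handles in
the pool, and poll() only wakes up for them when the server actually has
something to say.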

merlin