Thread: slow pg_connect()
Hi,

I'm using postgres 8.1 on a P4 2.8GHz with 2GB RAM (web server + database on the same machine).

Please, how long does your connection to postgres take?

$starttimer = time() + microtime();

$dbconn = pg_connect("host=localhost port=5432 dbname=xxx user=xxx password=xxx")
    or die("Couldn't Connect: " . pg_last_error());

$stoptimer = time() + microtime();
echo "Generated in " . round($stoptimer - $starttimer, 4) . " s";

It takes more than 0.05s :(

This function alone limits the server to at most 20 requests per second.

Thank you for any help! Best regards.
firerox@centrum.cz wrote:
> It takes more than 0.05s :(
>
> This function alone limits the server to at most 20 requests per second.

If you need that sort of frequent database access, you might want to look into:

- Doing more work in each connection and reducing the number of connections required;
- Using multiple connections in parallel;
- Pooling connections so you don't need to create a new one for every job;
- Using a more efficient database connector and/or language;
- Dispatching requests to a persistent database access provider that's always connected.

However, your connections are indeed taking a long time. I wrote a trivial test using psycopg for Python and found that the following script:

#!/usr/bin/env python
import psycopg
conn = psycopg.connect("dbname=testdb")

generally took 0.035 seconds (35 ms) to run on my workstation - including OS process creation, Python interpreter startup, database interface loading, connection, disconnection, and process termination.

A quick timing test shows that the connection/disconnection can be performed 100 times in 1.2 seconds:

import psycopg
import timeit

print timeit.Timer('conn = psycopg.connect("dbname=craig")',
                   'import psycopg').timeit(number=100)

... and this is still with an interpreted language. I wouldn't be too surprised if much better again could be achieved with the C/C++ APIs, though I don't currently feel the desire to write a test for that.

--
Craig Ringer
Craig Ringer wrote:
> firerox@centrum.cz wrote:
>> It takes more than 0.05s :(
>>
>> This function alone limits the server to at most 20 requests per second.
>
> If you need that sort of frequent database access, you might want to
> look into:
>
> - Doing more work in each connection and reducing the number of
>   connections required;
> - Using multiple connections in parallel;
> - Pooling connections so you don't need to create a new one for every
>   job;
> - Using a more efficient database connector and/or language;
> - Dispatching requests to a persistent database access provider that's
>   always connected.

Oh, I missed one: use a UNIX domain socket rather than a TCP/IP local socket. Database interfaces that support UNIX sockets (like psycopg) will normally do this if you omit the host argument entirely.

--
Craig Ringer
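[Editor's illustration, not part of the thread.] A minimal sketch of the DSN difference described above. The `uses_unix_socket` helper is hypothetical; it only mimics libpq's documented fallback behaviour (no `host`, or a `host` beginning with `/`, means a UNIX domain socket) and does not open real connections:

```python
# TCP/IP loopback DSN vs. UNIX-domain-socket DSN (illustrative names).
tcp_dsn = "host=localhost port=5432 dbname=testdb"
unix_dsn = "dbname=testdb"  # no host: libpq defaults to the local UNIX socket

def uses_unix_socket(dsn):
    """Heuristic mirror of libpq's rule: a missing host, or a host that
    is a directory path starting with '/', selects the UNIX socket."""
    params = dict(p.split("=", 1) for p in dsn.split())
    host = params.get("host", "")
    return host == "" or host.startswith("/")

print(uses_unix_socket(tcp_dsn))   # False
print(uses_unix_socket(unix_dsn))  # True
```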
> It takes more than 0.05s :(
>
> This function alone limits the server to at most 20 requests per second.

First, benchmarking using only PHP is not very accurate: you're probably also measuring some of the work that PHP needs to do just to get started in the first place.

Second, this 20/s figure is not requests per second but connections per second per PHP script. One pageview in PHP needs one connection, so it will delay the pageview by 0.05 seconds.

If you need raw speed, you can use pg_pconnect(), but be VERY careful, because that will keep one database connection open for every database for every web server process. If you have 10 database-driven websites running on the same web server, and that server is configured to run 100 processes at the same time, you will get 10 x 100 = 1000 open connections, which eats more RAM than you have.
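[Editor's illustration, not part of the thread.] The back-of-envelope arithmetic above can be written out directly; the per-backend memory figure is an assumption for illustration only:

```python
# Connection growth under pg_pconnect() (numbers from the post above).
sites = 10            # database-driven sites on the server
processes = 100       # concurrent web server processes
mb_per_backend = 5    # ASSUMED rough memory per idle postgres backend, in MB

open_connections = sites * processes
estimated_mb = open_connections * mb_per_backend

print(open_connections)  # 1000
print(estimated_mb)      # 5000 MB -- far beyond the 2GB in the original post
```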
firerox@centrum.cz wrote:
> Hi,
>
> I'm using postgres 8.1 on a P4 2.8GHz with 2GB RAM.
> (web server + database on the same server)
>
> It takes more than 0.05s :(
>
> This function alone limits the server to at most 20 requests per second.

I tried running the script a few times, and got substantially lower start-up times than you are getting. I'm using 8.1.11 on Debian on a 2x Xeon 2.40GHz with 3GB memory, so I don't think hardware would account for the difference.

Generated in 0.0046 s
Generated in 0.0036 s
Generated in 0.0038 s
Generated in 0.0037 s
Generated in 0.0038 s
Generated in 0.0037 s
Generated in 0.0047 s
Generated in 0.0052 s
Generated in 0.005 s

--
Tommy Gildseth
Hi,

firerox@centrum.cz wrote:
> Please, how long does your connection to postgres take?
>
> $starttimer = time() + microtime();
>
> $dbconn = pg_connect("host=localhost port=5432 dbname=xxx user=xxx password=xxx")
>     or die("Couldn't Connect: " . pg_last_error());
>
> $stoptimer = time() + microtime();
> echo "Generated in " . round($stoptimer - $starttimer, 4) . " s";
>
> It takes more than 0.05s :(
>
> This function alone limits the server to at most 20 requests per second.

Two hints:

* Read about configuring and using persistent database connections with PHP (http://www.php.net/manual/en/function.pg-pconnect.php)
* Use a connection pooler such as pgpool-II (http://pgpool.projects.postgresql.org/)

Using both techniques together should boost your performance.

Ciao,
Thomas
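[Editor's illustration, not part of the thread.] To make the second hint concrete, a minimal pgpool.conf fragment might look like the sketch below. The parameter names are from the pgpool-II documentation, but every value is an assumption to adapt to your own workload:

```
# pgpool.conf -- illustrative fragment only
listen_addresses = 'localhost'
port = 9999                  # clients connect here instead of 5432
backend_hostname0 = 'localhost'
backend_port0 = 5432
num_init_children = 32       # max concurrent client connections to pgpool
max_pool = 4                 # cached backend connections per pgpool child
connection_cache = on        # reuse backend connections across client sessions
```

PHP would then point pg_connect() at port 9999, and pgpool hands back an already-open backend connection instead of forking a new postgres backend per request.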
> * Read about configuring and using persistent database connections with PHP
>   (http://www.php.net/manual/en/function.pg-pconnect.php)

Though make sure you understand the ramifications of using persistent connections. You can quickly exhaust your connection slots this way, and also cause other issues for your server. If you do this, you'll probably have to adjust postgres to allow more connections, which usually means lowering the amount of memory each connection can use, which can itself cause performance issues.

I'd probably use pgpool-II and have it handle the connection pooling for you rather than doing it through PHP.

--
Postgresql & php tutorials
http://www.designmagick.com/
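[Editor's illustration, not part of the thread.] The trade-off described above shows up in postgresql.conf roughly as follows. Values are illustrative only; note that 8.1 takes shared_buffers as a count of 8 kB buffers and work_mem in kB (unit suffixes like 'MB' arrived in later releases):

```
# postgresql.conf -- illustrative trade-off only
max_connections = 200    # raised to cover persistent/pooled connections
shared_buffers = 50000   # ~400 MB of shared buffers on this 2GB machine
work_mem = 4096          # 4 MB per sort/hash, per connection -- keep this
                         # modest when max_connections is high
```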