Connection pooling/sharing software help

From: Kris Kiger
Here is the scenario I am running into:
    I have a cluster of databases.  I also have another cluster of
machines that will have to talk with some of the database machines.
Rather than set up a connection from each of the 'talking' machines to
each database, is it possible to share a single connection to a
database between machines?  Does such connection pooling/sharing
software exist?  Thanks in advance for the advice!

Kris



Re: Connection pooling/sharing software help

From: Kris Kiger
This looks exactly like what we want.  Currently we are using JDBC to
maintain persistent connections.  The advantage of this is that you can
use the JDBC driver written specifically for your version of Postgres
(type 1 or type 4, I forget which is the closest).  Do you have any
idea how closely SQL Relay compares, or whether there is other pooling
software that fits the description but works with JDBC somehow?
I appreciate your help!

Kris

>> Here is the scenario I am running into:
>>    I have a cluster of databases.  I also have another cluster of
>> machines that will have to talk with some of the database machines.
>> Rather than set up a connection from each of the 'talking' machines
>> to each database, is it possible to share a single connection to a
>> database between machines?  Does such connection pooling/sharing
>> software exist?  Thanks in advance for the advice!
>
> Yep, it's called SQL Relay, and it can be found at
> http://sqlrelay.sourceforge.net/
>
> I find it especially handy with Oracle, as OCI connections incur
> pretty big overhead.
>
> -- Mitch
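
As an aside, a quick way to quantify the fan-out being discussed is to
count open backends per client machine on the server side.  This is a
minimal sketch, assuming superuser access, a Postgres version new
enough to expose client_addr in pg_stat_activity, and 'test' as a stand-in
database name:

    # Count open backend connections grouped by the client machine
    # that owns them.
    psql -U postgres -d test -c \
      "SELECT client_addr, count(*) FROM pg_stat_activity GROUP BY client_addr;"

Comparing that count before and after putting a pooler in the middle is
a simple way to confirm the fan-out is actually shrinking.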



Large file support needed? Trying to identify root of error.

From: Kris Kiger
I've got a database consisting of a single table with five integer
columns, a timestamp with time zone, and a boolean.  The table is 170
million rows long.  The contents of the tar'd dump file produced by:
    pg_dump -U postgres -Ft test > test_backup.tar
are: 8.dat (approximately 8GB), a toc, and restore.sql.

No errors are reported on dump; however, when a restore is attempted I get:

ERROR:  unexpected message type 0x58 during COPY from stdin
CONTEXT:  COPY test_table, line 86077128: ""
ERROR:  could not send data to client: Broken pipe
CONTEXT:  COPY test_table, line 86077128: ""

I am doing the dump & restore on the same machine.

Any ideas?  If the file is too large, is there any way Postgres could
break it up into smaller chunks for the tar when backing up?  Thanks for
the help!

Kris




Re: Large file support needed? Trying to identify root of error.

From: Tom Lane
Kris Kiger <kris@musicrebellion.com> writes:
> I've got a database consisting of a single table with five integer
> columns, a timestamp with time zone, and a boolean.  The table is 170
> million rows long.  The contents of the tar'd dump file produced by:
>     pg_dump -U postgres -Ft test > test_backup.tar
> are: 8.dat (approximately 8GB), a toc, and restore.sql.

Try -Fc instead.  I have some recollection that tar format has a
hard-wired limit on the size of individual members.

            regards, tom lane
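
For reference, a minimal sketch of the custom-format round trip being
suggested, reusing the database name from the original command (the
.dump file name is just a convention, not anything Postgres requires):

    # Custom format is compressed and, unlike tar format, has no
    # hard-wired limit on the size of individual archive members.
    pg_dump -U postgres -Fc test > test_backup.dump

    # Restore directly from the archive with pg_restore.
    pg_restore -U postgres -d test test_backup.dump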

Re: Large file support needed? Trying to identify root of error.

From: "Scott Marlowe"
On Mon, 2004-07-19 at 13:28, Kris Kiger wrote:
> I've got a database consisting of a single table with five integer
> columns, a timestamp with time zone, and a boolean.  The table is 170
> million rows long.  The contents of the tar'd dump file produced by:
>     pg_dump -U postgres -Ft test > test_backup.tar
> are: 8.dat (approximately 8GB), a toc, and restore.sql.
>
> No errors are reported on dump; however, when a restore is attempted I get:
>
> ERROR:  unexpected message type 0x58 during COPY from stdin
> CONTEXT:  COPY test_table, line 86077128: ""
> ERROR:  could not send data to client: Broken pipe
> CONTEXT:  COPY test_table, line 86077128: ""
>
> I am doing the dump & restore on the same machine.
>
> Any ideas?  If the file is too large, is there any way Postgres could
> break it up into smaller chunks for the tar when backing up?  Thanks for
> the help!

How, exactly, are you restoring?  Doing things like:

    cat file | pg_restore ...

can cause problems, because cat is often limited to 2 gigs on many OSes.
Just use a redirect:

    psql dbname < file
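
Worth noting: for a tar- or custom-format dump, the pipe can be avoided
entirely by pointing pg_restore at the archive itself.  A sketch, with
the file and database names assumed from the earlier messages:

    # pg_restore opens the file directly, so no shell tool ever has to
    # stream the whole 8GB archive through a pipe.
    pg_restore -U postgres -d test test_backup.tar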


Re: Large file support needed? Trying to identify root of error.

From: Kris Kiger
Thanks Tom, -Fc worked great!  I appreciate your help.

Kris

Tom Lane wrote:

>Kris Kiger <kris@musicrebellion.com> writes:
>
>>I've got a database consisting of a single table with five integer
>>columns, a timestamp with time zone, and a boolean.  The table is 170
>>million rows long.  The contents of the tar'd dump file produced by:
>>    pg_dump -U postgres -Ft test > test_backup.tar
>>are: 8.dat (approximately 8GB), a toc, and restore.sql.
>
>Try -Fc instead.  I have some recollection that tar format has a
>hard-wired limit on the size of individual members.
>
>            regards, tom lane