Thread: Programmatically switching database
Hi,

Suppose a PostgreSQL server is running databases "db1" and "db2" that are in the same db cluster. A client app establishes connection "conn1" to db1. Is there a way to programmatically switch conn1 to use db2 without doing disconnect-from-db1-connect-to-db2? Something like what "\c" does, but usable independently of psql?

I need this to be able to reuse a pool of connections to db1 for actions on db1, db2 ... dbn.

Thanks
ow writes:
> Is there a way to programmatically switch conn1 to use db2 without doing
> disconnect-from-db1-connect-to-db2? Something like what "\c" does, but
> usable independently of psql? I need this to be able to reuse a pool of
> connections to db1 for actions on db1, db2 ... dbn.

Nothing prevents you from keeping the connection to db1 open when you open a connection to db2. By the way, psql's "\c" command does exactly disconnect-from-db1-connect-to-db2.

--
Peter Eisentraut peter_e@gmx.net
--- Peter Eisentraut wrote:
> Nothing prevents you from keeping the connection to db1 open when you
> open a connection to db2. By the way, psql's "\c" command does exactly
> disconnect-from-db1-connect-to-db2.

That's the whole point: I'm trying to avoid maintaining *separate* connection pools for each db. In other words, instead of having, say, 5 connection pools to 5 dbs with a total of 1000 connections, I could've used just one (1) pool with 200 connections, if there were a way to "switch db" (or to do a cross-db query).

Any ideas? Thanks
ow writes:
> That's the whole point: I'm trying to avoid maintaining *separate*
> connection pools for each db. In other words, instead of having, say,
> 5 connection pools to 5 dbs with a total of 1000 connections, I could've
> used just one (1) pool with 200 connections, if there were a way to
> "switch db" (or to do a cross-db query).

I'm afraid that what you want to do is not possible. Perhaps you want to organize your data into schemas, not databases.

--
Peter Eisentraut peter_e@gmx.net
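For reference, a minimal sketch of the schema approach being suggested here (database, schema, and table names are hypothetical): each former database becomes a schema inside one database, and a pooled connection is "switched" per session by changing the search_path, with no reconnect.

    psql mydb <<'EOF'
    CREATE SCHEMA db1;
    CREATE SCHEMA db2;
    -- point this session at db2; unqualified names now resolve there
    SET search_path TO db2;
    SELECT * FROM some_table;   -- reads db2.some_table
    EOF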
--- Peter Eisentraut <peter_e@gmx.net> wrote:
> I'm afraid that what you want to do is not possible. Perhaps you want
> to organize your data into schemas, not databases.

There's too much data to put in one db. If anything happens to it, I'll never be able to restore (or dump) it in time. BTW, MySQL has cross-db queries.

Thanks
ow writes:
> There's too much data to put in one db.

There is never too much data to put in one database.

> If anything happens to it, I'll never be able to restore (or dump) it
> in time.

You could just dump individual schemas.

> BTW, MySQL has cross-db queries.

PostgreSQL has schemas; that's the same thing. PostgreSQL's "databases" are not the same thing as MySQL's "databases".

--
Peter Eisentraut peter_e@gmx.net
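A hedged illustration of that equivalence (schema and table names are hypothetical): what MySQL spells as a cross-database query is, in PostgreSQL, simply a cross-schema query within one database.

    psql mydb <<'EOF'
    -- join across two schemas in the same database
    SELECT a.id, o.total
    FROM db1.accounts a
    JOIN db2.orders o ON o.account_id = a.id;
    EOF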
--- Peter Eisentraut <peter_e@gmx.net> wrote:
> You could just dump individual schemas.

How? The doc only mentions a db: pg_dump [option...] [dbname]

Then, how would I lock users out of the schema while it's being loaded?

Thanks
--- ow <oneway_111@yahoo.com> wrote:
> How? The doc only mentions a db: pg_dump [option...] [dbname]
>
> Then, how would I lock users out of the schema while it's being loaded?

Never mind how, I see there's a "-n namespace" option in 7.4. But still, how would I lock users out of the schema while it's being loaded?

Thanks
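For the record, a minimal sketch of that per-schema dump using 7.4's -n option (database and schema names are hypothetical):

    # dump only the schema "my_namespace" from database "my_db"
    pg_dump -n my_namespace my_db > my_namespace.dump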
ow wrote:
> --- ow <oneway_111@yahoo.com> wrote:
>> How? The doc only mentions a db: pg_dump [option...] [dbname]
>>
>> Then, how would I lock users out of the schema while it's being loaded?
>
> Never mind how, I see there's a "-n namespace" option in 7.4. But still,
> how would I lock users out of the schema while it's being loaded?

#!/bin/sh

(
    echo "start transaction;"
    cat $2
    echo "commit transaction;"
) psql $1

then call it as

    reload_in_transaction my_db my_namespace.dump

Since the whole dump will be restored inside of one transaction, nobody will see it while it's reloading.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #
Jan Wieck wrote:
> (
>     echo "start transaction;"
>     cat $2
>     echo "commit transaction;"
> ) psql $1

Gosh, there's the pipe missing before the psql. That last line should read:

) | psql $1

> then call it as
>
>     reload_in_transaction my_db my_namespace.dump
>
> Since the whole dump will be restored inside of one transaction, nobody
> will see it while it's reloading.

Jan
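With the pipe restored, a slightly hardened variant of the same script (a sketch; it assumes your psql supports the ON_ERROR_STOP variable, which makes psql quit on the first error and thereby roll back the still-open transaction):

    #!/bin/sh
    # reload_in_transaction: restore a schema dump inside a single transaction
    # usage: reload_in_transaction my_db my_namespace.dump
    (
        echo "start transaction;"
        cat "$2"
        echo "commit transaction;"
    ) | psql --set ON_ERROR_STOP=1 "$1"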
--- Jan Wieck <JanWieck@yahoo.com> wrote:
> Since the whole dump will be restored inside of one transaction, nobody
> will see it while it's reloading.

Interesting idea. I know some RDBMSs that are very picky about DDL in transactions, but it appears PostgreSQL handles it without any problems.

My concern though ... wouldn't the PostgreSQL server collapse when faced with a transaction spanning 100M+ records? And how would that affect overall restore time? I should've tried it myself but I can't; my "regular" pg_restore has been running for 4+ hours, can't kill it now ...

Thanks
ow <oneway_111@yahoo.com> writes:
> My concern though ... wouldn't the PostgreSQL server collapse when faced
> with a transaction spanning 100M+ records?

No. You're extrapolating from Oracle-specific assumptions again.

			regards, tom lane
Clinging to sanity, tgl@sss.pgh.pa.us (Tom Lane) mumbled into her beard:
> ow <oneway_111@yahoo.com> writes:
>> My concern though ... wouldn't the PostgreSQL server collapse when faced
>> with a transaction spanning 100M+ records?
>
> No. You're extrapolating from Oracle-specific assumptions again.

Or from MySQL-specific assumptions :-).

It seems reasonable (absent particular knowledge to the contrary) that the size of a transaction might be _expected_ to be some sort of constraint; it is quite surprising that it isn't, and I don't think that's purely based on the mistake of assuming that the whole world does things exactly like Oracle.

--
(format nil "~S@~S" "cbbrowne" "acm.org")
http://cbbrowne.com/info/postgresql.html
Rules of the Evil Overlord #89. "After I capture the hero's superweapon, I will not immediately disband my legions and relax my guard because I believe whoever holds the weapon is unstoppable. After all, the hero held the weapon and I took it from him." <http://www.eviloverlord.com/>
ow <oneway_111@yahoo.com> writes:
> My concern though ... wouldn't the PostgreSQL server collapse when faced
> with a transaction spanning 100M+ records?

The number of records involved really doesn't faze Postgres at all. However, the amount of time spent in the transaction could be an issue if there is other activity in other schemas of the same database. As long as the transaction is running, none of the deleted or old updated data in any schema of the database can be cleaned up by vacuum, as Postgres thinks the big transaction "might" need to see it sometime.

So if the rest of the database is still active, the tables and indexes being updated may grow larger than normal. If it goes on for a _really_ long time they might need a VACUUM FULL at some point to clean them up.

--
greg
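A hedged illustration of that cleanup (database, schema, and table names are hypothetical): once the long restore transaction has committed, VACUUM FULL compacts any tables that bloated while vacuum was blocked from reclaiming dead rows.

    # reclaim the space that plain VACUUM could not touch during the restore
    psql my_db -c "VACUUM FULL VERBOSE other_schema.busy_table;"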
Hi all,

I'm trying to optimise some SQL inserts I have by using the COPY command instead. I'm connecting from Java using 7.4 beta 3. It works at the command line, but I can't get it to work from Java. Here's the pg log output:

LOG:  query: COPY bingo_cards (account_id, game_id, win_ball, card, current_call, state) FROM stdin with delimiter '|';
34|50558|75|3,7,8,4,12,23,18,24,22,21,38,45,42,39,43,55,54,53,51,46,70,66,71,75,65|0|0
34|50558|68|4,8,2,13,11,26,23,30,19,18,39,35,31,36,43,55,60,48,59,58,72,68,65,71,66|0|0
34|50558|72|15,11,2,4,5,26,18,20,25,27,43,36,45,38,31,54,57,58,55,50,66,73,71,72,69|0|0
\.

ERROR: syntax error at or near "34" at character 108

Thanks for any help

Ritchie
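A likely explanation, offered as a hedged note: the 7.4-era JDBC driver has no support for the COPY protocol, so when the COPY statement and its data rows are sent as one query string, the backend parses the data lines as SQL, which is why it trips over the first "34". Newer versions of the JDBC driver eventually gained a dedicated COPY API; a workaround that needs no driver support at all is to feed the data file to psql (file and database names here are hypothetical):

    # run the same COPY through psql, which does speak the COPY protocol;
    # the data rows are supplied on psql's standard input
    psql -d bingo_db -c "COPY bingo_cards (account_id, game_id, win_ball, card, current_call, state) FROM stdin WITH DELIMITER '|'" < cards.dat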