Re: Fastest way to clone schema ~1000x - Mailing list pgsql-general

From: Emiel Mols
Subject: Re: Fastest way to clone schema ~1000x
Msg-id: CAF5w505X-OVN_EW9nsOYjBPLQ1auAdcDLKzreCeY5TO_YJEAtA@mail.gmail.com
In response to: Re: Fastest way to clone schema ~1000x (Daniel Gustafsson <daniel@yesql.se>)
Responses: Re: Fastest way to clone schema ~1000x
List: pgsql-general
On Mon, Feb 26, 2024 at 3:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:
> There is a measurable overhead in connections, regardless of if they are used
> or not.  If you are looking to squeeze out performance then doing more over
> already established connections, and reducing max_connections, is a good place
> to start.

Clear, but with database-per-test (and our backend setup), it would have been *great* if we could switch databases on the same connection (similar to "USE xxx" in MySQL). That would limit the number of connections to the number of workers, instead of multiplying it by the number of tests.
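
To illustrate what I mean (a minimal sketch; the host, database names, and credentials are made up):

    <?php
    // MySQL: one connection per worker, switched between test databases
    // in-connection.
    $mysqli = new mysqli('127.0.0.1', 'app_user', 'app_pass');
    $mysqli->select_db('test_db_42');  // or: $mysqli->query('USE test_db_42');

    // PostgreSQL: no in-connection switch exists; each test database
    // needs its own connection, so connections scale with workers x tests.
    $pg = pg_connect('host=127.0.0.1 dbname=test_db_42 user=app_user');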

Even with a pooler, we would still be maintaining thousands of connections from the backend workers to the pooler. I would expect this to be fairly efficient, but it's still unnecessary. Also, neither pgbouncer nor pgpool seems to support switching databases in-connection (they could have implemented the aforementioned "USE" statement, I think!). [Additionally, we're using PHP, which doesn't seem to have a good shared-memory pool implementation; pg_pconnect is pretty buggy.]
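
For what it's worth, the multiplication is visible in how pg_pconnect() caches connections: it reuses a persistent connection only when the whole connection string matches, so every distinct test database keeps its own backend connection alive (a sketch; the names are hypothetical):

    <?php
    // A different dbname means a new persistent backend connection that
    // stays open for the worker's lifetime.
    $c1 = pg_pconnect('host=127.0.0.1 dbname=test_db_1 user=app_user');
    $c2 = pg_pconnect('host=127.0.0.1 dbname=test_db_2 user=app_user'); // second connection
    $c3 = pg_pconnect('host=127.0.0.1 dbname=test_db_1 user=app_user'); // reuses $c1's backend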

I'll continue with some more testing. Thanks for now!
