* Josh Berkus <josh@agliodbs.com> [080220 18:00]:
> All,
>
> I think we're failing to discuss the primary use-case for this, which
> is one reason why the solutions aren't obvious.
> However, imagine you're adminning 250 PostgreSQL servers backing a
> social networking application. You decide the application needs a
> higher default sort_mem for all new connections, on all 250 servers.
> How, exactly, do you deploy that?
>
> Worse, imagine you're an ISP and you have 250 *differently configured*
> PostgreSQL servers on vhosts, and you need to roll out a change in
> logging destination to all machines while leaving other settings
> untouched.
But, from my experience, those are "pretty much" solved problems, with
things like rsync, SCM (pick your favourite), and tools like clusterssh,
multixterm, rancid, wish, expect, etc.

I would have thought that any "larger enterprise" was familiar with
these approaches, and is probably using them already to
manage/configure their general Unix environments.
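
To make the "already solved" claim concrete, here is a minimal sketch of
the ssh-loop approach for the sort_mem example upthread. The hostnames,
data directory, and value are placeholders of my own, not anything from
this thread, and a real rollout would come from your SCM rather than an
inline sed:

```shell
#!/bin/sh
# Sketch: push one postgresql.conf setting to a list of hosts, then
# signal the postmaster to reload so new connections pick it up.
# Hostnames, PGDATA, and the value are illustrative placeholders.
SETTING=sort_mem
VALUE=16384
PGDATA=/var/lib/pgsql/data

for host in pg01 pg02 pg03; do
    ssh "$host" "sed -i.bak \
        's/^[# ]*${SETTING} *=.*/${SETTING} = ${VALUE}/' \
        ${PGDATA}/postgresql.conf && pg_ctl reload -D ${PGDATA}"
done
```

The sed expression replaces the setting whether it is currently set or
still commented out with the shipped default, keeping a .bak copy; the
same loop works equally well for pg_hba.conf plus a reload.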
> We need a server-based tool for manipulating postgresql.conf, and
> one which is network-accessible, allows updating individual settings,
> and can be plugged into 3rd-party server management tools. This goes
> for pg_hba.conf as well, for the same reasons.
>
> If we want to move PostgreSQL into larger enterprises (and I certainly
> do) we need to make it more manageable.
Do we need to develop our own set of "remote management" tools/systems,
or possibly document some best practices using already available
multi-server management tools?
--
Aidan Van Dyk Create like a god,
aidan@highrise.ca command like a king,
http://www.highrise.ca/ work like a slave.