Thread: RE: I remember why I suggested CREATE FUNCTION...AS NULL

RE: I remember why I suggested CREATE FUNCTION...AS NULL

From: Magnus Hagander
Date:
> > 2) Change pg_dump to walk through dependencies?
> 
> The trouble with that is that dependency analysis is a monstrous job,
> and one that would make pg_dump even more fragile and backend-version-
> dependent than it is now.  

One way to get around that might be to make the dumping routine a part
of the backend instead of a frontend program. This is what at least MS SQL
does. So I can do, for example:

BACKUP DATABASE mydb TO DISK = 'c:\foo.dump'
or, if I want to send the backup directly to my backup program:
BACKUP DATABASE mydb TO PIPE = 'somepipe'

Then to reload it I just do
RESTORE DATABASE mydb FROM DISK = 'c:\foo.dump'


Doing this might also help with permissions issues, since the entire
process can be run inside the backend (skipping security checks at some
points, assuming that it was a user with backup permissions who started
the operation)?


//Magnus


RE: I remember why I suggested CREATE FUNCTION...AS NULL

From: The Hermit Hacker
Date:
On Mon, 11 Sep 2000, Magnus Hagander wrote:

> > > 2) Change pg_dump to walk through dependencies?
> > 
> > The trouble with that is that dependency analysis is a monstrous job,
> > and one that would make pg_dump even more fragile and backend-version-
> > dependent than it is now.  
> 
> One way to get around that might be to make the dumping routine a part
> of the backend instead of a frontend program. This is what at least MS SQL
> does. So I can do, for example:
> 
> BACKUP DATABASE mydb TO DISK = 'c:\foo.dump'
> or, if I want to send the backup directly to my backup program:
> BACKUP DATABASE mydb TO PIPE = 'somepipe'
> 
> Then to reload it I just do
> RESTORE DATABASE mydb FROM DISK = 'c:\foo.dump'
> 
> 
> Doing this might also help with permissions issues, since the entire
> process can be run inside the backend (skipping security checks at some
> points, assuming that it was a user with backup permissions who started
> the operation)?

One issue with this comes to mind ... if I allocate X meg for a database,
with Y meg extra for temp tables, pg_sort, etc. ... if someone has the
ability to dump to the server itself (assuming the server is separate from
the client machine), doesn't that run a major risk of filling up disk space
awfully quickly?  For instance, the dba for that database decides to back up
before and after making changes, or before/after a large update ... what
sorts of checks/balances are you proposing to prevent disk space problems?

also, this is going to require database-level access controls,
no?  something we don't have right now ... so that too is going to have to
be implemented ... basically, you're going to want some sort of 'GRANT
BACKUP TO dba;' command so that there can be more than one person with
complete access to the database, but only one person with access to
initiate a backup ...
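The access-control piece might be sketched like this (purely hypothetical syntax — PostgreSQL has neither a BACKUP privilege nor a server-side BACKUP command; the statement shapes and names here are illustrative only):

```sql
-- Hypothetical: a separate privilege for initiating server-side backups,
-- distinct from ordinary table-level GRANTs, so full data access does not
-- automatically imply the right to write dump files on the server.
GRANT BACKUP ON DATABASE mydb TO dba;

-- With that in place, only 'dba' could run the (equally hypothetical)
-- server-side command proposed earlier in the thread:
BACKUP DATABASE mydb TO DISK '/var/backups/mydb.dump';
```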

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org