Re: [HACKERS] CVS target for docs - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: [HACKERS] CVS target for docs
Msg-id: 7326.922030828@sss.pgh.pa.us
In response to: Re: [HACKERS] CVS target for docs (Michael Meskes <meskes@postgresql.org>)
Responses: Re: [HACKERS] CVS target for docs
List: pgsql-hackers
> I'm currently thinking about moving to cvs completely but wonder how much
> more network traffic this will cause.

FWIW, I've been using remote cvs from my home machine and it seems to
work very well, and reasonably speedily.  I ran a "cvs update" on the
Postgres tree just now, while watching hub's CPU load via "top" in
another window.  Elapsed time was 2m 45s, and the server's CPU usage
on hub never got above 3%.  This run only had to pull a couple of files,
since I'd just updated yesterday --- a typical run probably takes more
like 4m or so.  Network bandwidth doesn't seem to be the limiting factor
in an update (to judge from das blinkenlights on my router), though it
is the bottleneck in a full checkout.

If what you're currently doing is cvs or cvsup into a local directory
at hub, then transferring the files to home via tar and ftp, I've got
to think that remote cvs is a vastly more efficient and less error-prone
solution.

BTW, I recommend putting

	cvs -z3
	update -d -P
	checkout -P

in your ~/.cvsrc.  The first of these invokes gzip -3 compression for
all cvs network transfers; that should take care of bandwidth problems.
The other two make the default handling of subdirectories more
reasonable.
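
In other words, with those lines in ~/.cvsrc, a plain

	cvs update

behaves as if you had typed

	cvs -z3 update -d -P

so new directories get created, empty ones get pruned, and the network
transfer is compressed, without having to remember the switches each time.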
        regards, tom lane

