Thread: Docs refreshed
I've just committed a bunch o' patches for the docs. Changes include:

o a chapter on index cost estimation (indexcost.sgml) from Tom Lane.
o a chapter on the PL/Perl language from Mark Hollomon.
o a chapter on queries and EXPLAIN from Tom Lane.
o lots of other bits and pieces.

One change was to separate the docs for PL/pgSQL, PL/Tcl, and PL/Perl into separate chapters, moving them to the User's Guide, and moving the "How to interface a language" material to the Programmer's Guide. It seems to me that these easy-to-use programming languages come close to being a user-accessible feature (hence placing them in the User's Guide), much more so than, say, libpq.

Comments?

- Thomas (sent at Thu Mar 30 22:41 UTC 2000)

--
Thomas Lockhart   lockhart@alumni.caltech.edu   South Pasadena, California
> (sent at Thu Mar 30 22:41 UTC 2000)

Thu Mar 30 23:20 UTC 2000. So I'm seeing a current round trip of ~40 minutes. My recollection is that I would see round trips of ~5 minutes in the good old days. btw, I see fast turnaround for the committers' list.

- Thomas
> Something to think about maybe.

Yeah, I've thought about it, and it is not at all clear. I understand all of your points, but for the hardcopy versions of the docs, a single 600-page doc seems more unwieldy than several 200-page docs (yes, they *are* that big!!).

Like you, I assume that people using HTML read the integrated doc. btw, it is possible to mark up the docs so that, say, cross references are included if the output is HTML, but only citation references are included if it is hardcopy. So if we moved to having only the integrated doc in HTML, and only the smaller docs in hardcopy, then we could put more "clickable cross references" into the HTML.

- Thomas
Thomas Lockhart writes:

> > Something to think about maybe.
>
> Yeah, I've thought about it, and it is not at all clear. I understand
> all of your points, but for the hardcopy versions of docs having a
> single 600 page doc seems more unwieldy than having several 200 page
> docs (yes, they *are* that big!!).
>
> Just as you, I assume that people using html read the integrated doc.

Maybe we need a show of hands of how many people bother with the hardcopy. I think for most people anything beyond 20 pages would never get near the printer. By the time you reach 200 pages, the extra 400 don't matter; the paper is going to run out beforehand anyway. If you want to print something for reference, you pick out the interesting pages (such as the reference pages).

> btw, it is possible to mark up the docs so that you can, say, include
> cross references if it is html but include only citation references if
> it is hardcopy. So if we moved to having only the integrated doc in
> html, and only the smaller docs in hardcopy, then we could put more
> "clickable cross references" into the html.

That's the next question I had for you. :) I'm just happy the stuff builds for me. But I don't think just linking things together is the answer; it only works around an organizational problem, IMHO.

Here's a thought: if we made it in "book" form like I suggested, we would probably have about a dozen major chapters. That's 50 pages each, which is much more printer-friendly, and you get to choose better what to print. The only thing you'd have to do is split up the PostScript into separate files at some stage.

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden
Thomas Lockhart writes:

> Just as you, I assume that people using html read the integrated doc.

Btw., the fact that the print docs are in US Letter format makes them slightly beyond useless for the rest of the world. :( I still think that a 200-page document is not any less unwieldy than a 600-page one. There's gotta be an option to print only pages x through y either way.

Considering that this is pretty much what's holding up releases, would it be possible to consider not putting the PostScript docs in the distribution, and instead just putting them on the ftp server at your convenience (and in A4 as well) for those who choose to get them? Not to break your heart or anything, but thinking practically ... :)

Also, don't put them in the CVS tree. They're just wasting space, since they're out of date and not really useful for developers. In the same spirit, I'd suggest not including the html tars in the CVS tree either. In the distribution I would like to have them *untarred* so users can browse them before/without installing. But for that they can be generated when making the distribution (make -C doc postgres.html; not too hard, but we'd need to use a separate build dir); there's no need to keep out-of-date copies in CVS.

What do other people think? It seems to me that many people just read the stuff on postgresql.org.
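The distribution-time generation being suggested might look roughly like the fragment below. This is a sketch, not a real script: the `postgres.html` target is quoted from the mail above, but the directory names and the assumption that the doc Makefile honors an out-of-tree build directory are hypothetical.

```shell
# Hypothetical fragment of a "make distribution" script (sketch only).
# Build the integrated HTML docs in a scratch directory so the CVS
# checkout stays clean, then ship the result untarred in the tarball.

SRCDIR=/usr/local/src/pgsql          # assumption: location of the checkout
BUILDDIR=/tmp/pgsql-doc-build        # assumption: separate build dir
DISTDIR=/tmp/pgsql-dist/doc/html     # assumption: tarball staging area

mkdir -p "$BUILDDIR" "$DISTDIR"

# Generate the integrated HTML doc out of tree, as suggested above.
make -C "$SRCDIR/doc" postgres.html BUILDDIR="$BUILDDIR"

# Copy the HTML files in untarred, so users can browse them
# before/without installing.
cp "$BUILDDIR"/*.html "$DISTDIR/"
```

Leaving the files untarred in the distribution costs nothing at build time and spares users a manual untar step before reading.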
Peter Eisentraut <peter_e@gmx.net> writes:

> Also don't put them in the CVS tree. They're just wasting space since
> they're out of date and not really useful for developers.
> In the same spirit I'd suggest not including the html tars in the CVS tree
> either.

It's really pretty silly to have tar.gz files in the CVS tree. I can imagine what the underlying diff looks like every time they are updated :-(. And, since they are ultimately just derived files, I agree with Peter that they shouldn't be in CVS at all. They should, however, be in release tarballs.

> In the distribution I would like to have them *untarred* so users
> can browse them before/without installing.

Doesn't matter a whole lot; you can untar them yourself if you want to do that.

			regards, tom lane
> It's really pretty silly to have tar.gz files in the CVS tree. I can
> imagine what the underlying diff looks like every time they are updated
> :-(. And, since they are ultimately just derived files, I agree with
> Peter that they shouldn't be in CVS at all.

Well, it wasn't pretty silly when I first did it, so have a little sense of history please ;) It has only been within the last year or so that the docs could be built on hub.org (the postgresql.org host). It still breaks occasionally if scrappy tries updating his machine, since the tools are used only by me, so he wouldn't notice if something went wrong. Previously, the docs had to be built on my machine at home, then downloaded (and home is still where all package development and debugging takes place). If they were to be recoverable *on* hub.org, they had to go into CVS.

It may be that we could now generate them from scratch during the release tarball build (it takes maybe 10-15 minutes to build all the variations). But I would think you wouldn't want to do that, and would rather pick them up from a known location. CVS is where we do that now, but it could be somewhere else, I suppose. Perhaps our build script for the tarball could include a "wget" from a known location on postgresql.org?

Vince is planning on redoing the web site a bit to decouple the release docs from the development docs. I'd like to have a "get the docs" page which gives us the release docs for each release plus the current development docs, and we could have the tarball builder get the release docs for the upcoming release from there.

Is this something for v7.1, or is there something important about this now??

- Thomas
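The "wget from a known location" idea could be sketched as below. Everything specific here is a placeholder: the URL, the version, and the doc tarball names are hypothetical, not real paths on postgresql.org.

```shell
# Hypothetical step in the release-tarball build (sketch only):
# fetch prebuilt docs from a known web location instead of keeping
# derived files in CVS.

DOCS_URL="http://www.postgresql.org/docs/release"   # hypothetical location
VERSION="7.0"                                       # release being packaged
DISTDIR="postgresql-$VERSION/doc"

mkdir -p "$DISTDIR"

# File names are illustrative; the real set of doc artifacts may differ.
for f in postgres.tar.gz user.ps.gz programmer.ps.gz; do
    wget -O "$DISTDIR/$f" "$DOCS_URL/$VERSION/$f" || exit 1
done
```

The point of the design is that the tarball build stays reproducible from a single published location, while CVS holds only sources, not derived files.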
> > Just as you, I assume that people using html read the integrated doc.
>
> Btw., the fact that the print docs are in US Letter format makes them
> slightly beyond useless for the rest of the world. :( I still think that a
> 200 page document is not any less unwieldy than a 600 page one. There's
> gotta be an option to only print pages x through y either way.

<bad attitude>
Hmm. I'm glad you appreciate the hundreds of hours of work I've put into the docs :/ And I haven't seen anyone else stepping up to provide hardcopy-style docs for "the rest of the world". I personally value hardcopy docs any time I try to learn something (as opposed to just refreshing my memory on a detail), and I think they are important for others too.
</bad attitude>

> Considering that this is pretty much what's holding up releases, would it
> be possible to consider not putting the postscript docs in the
> distribution and just put them on the ftp server at your convenience (and
> in A4 as well) for those who choose to get it? Not to break your heart or
> something but thinking practically ... :)

OK, there is a little-known secret here: the docs are *not* holding up the release. But the release will be held up until both the docs *and* the release are ready to go, and at the moment neither is. In fact, a fundamental part of our release cycle is that, for the last couple of weeks before the actual release, the project is "waiting for docs". During that time, the old adage that "idle hands do the devil's work" comes into play: people start poking at the release, trying things, maybe putting it into production, trying it on all platforms, etc., and we find a few extra bugs and get our platform reports finalized. All of this is essential for a quality release. So don't believe the docs story too much, but don't try doing away with the sham either ;) btw, I hadn't fully realized the above until you started poking at it, so thanks :)

> Also don't put them in the CVS tree. They're just wasting space since
> they're out of date and not really useful for developers.

You haven't yet suggested enough other mechanisms to adequately replace the current scheme, but I'll do it for you, under separate cover, sometime soon.

> In the same spirit I'd suggest not including the html tars in the CVS tree
> either. In the distribution I would like to have them *untarred* so users
> can browse them before/without installing. But for that they can be
> generated when making the distribution (make -C doc postgres.html, not too
> hard but we'd need to use a separate build dir), no need to keep out of
> date copies in CVS.

Well, the copies *aren't* out of date for the corresponding release version, so that isn't the issue afaict. tar vs. untar doesn't seem to be a big issue either, though the tarball is pretty much required if the html is in CVS, since the names of the html files only occasionally reproduce from one rev to the next (note the large number of random generic names for internal portions of chapters).

- Thomas