Thread: Is a modern build system acceptable for older platforms
There have been several discussions of replacing PG's autoconf + src/tools/msvc system. The last example is happening now at the bottom of the Setting rpath on llvmjit.so thread.
I see potentially big advantages to moving, but also to PG's conservative approach that keeps it running on edge and old platforms, so I set out to look more carefully at what could be problematic or a showstopper for a more modern build system. Here are my findings, hope they help.

Unlike autoconf, all newer alternatives that I know of (certainly CMake and Meson, which were floated as alternatives so far) require themselves to be present on the build machine when building. I know they have good reasons to do this, but it means they impose new dependencies for building PG. Let's see what those are for CMake and Meson, to get an idea of whether that's acceptable and a feeling for how much friction they would introduce.
CMake
=====
* needs a C++11 compiler (since 3.10, before it used to only need C++98)
* needs libuv (since 3.10 apparently, I know that some years ago it had no library dependencies besides the C++ standard library)
* has a make backend, so no new dependency (it may even work with non-GNU make, so it might actually remove one dependency)
* can bootstrap on a number of Unix systems, see https://gitlab.kitware.com/cmake/cmake/blob/master/bootstrap
For the platforms in "CMake's buildfarm" see https://open.cdash.org/index.php?project=CMake
The C++11 requirement caused 3.10 and higher to not build anymore for HP-UX:
https://gitlab.kitware.com/cmake/cmake/issues/17137
Meson
=====
* needs Python >= 3.4
* needs ninja
** meson has no make backend see http://mesonbuild.com/FAQ.html#why-is-there-not-a-make-backend for rationale
** as a small positive, this would mean not having to explain "you need GNU make, BSD make is not enough"
Ninja:
* needs C++
** I think C++98 is enough but I'm not 100% sure; a quick look at the code turned up no newer C++ features, and the bootstrap script does not pass any -std arguments to the C++ compiler, so it should be C++98
* https://github.com/ninja-build/ninja/pull/1007 talks about adding AIX support and is in a release already
* https://github.com/ninja-build/ninja/blob/master/configure.py is the bootstrap script which lists these as known platforms: 'linux', 'darwin', 'freebsd', 'openbsd', 'solaris', 'sunos5', 'mingw', 'msvc', 'gnukfreebsd', 'bitrig', 'netbsd', 'aix', 'dragonfly'
Python 3:
* python.org points to ActivePython for HP-UX: https://www.python.org/download/other/
* some googling suggests Python > 3.2 works well on AIX and there are some links to binaries
If I look at the requirements above versus what Postgres has in src/template and in the build farm it seems like HP-UX and AIX could be the more problematic or at least fiddly ones.
A related issue is that future versions of CMake or Meson could move their baseline dependencies and desupport old platforms faster than PG might want to, but in that case one could make the case for just sticking with the older Meson or CMake release.
So before discussing whether the gains from switching build systems would offset the pain, I think the project needs to decide whether a newer build system is acceptable in the first place, as it has a chance of desupporting a platform altogether, or at least making things more painful on some platforms by adding the bootstrap step for the build system, with potentially cascading dependencies (get Python 3 working, get ninja bootstrapped, get PG built; or get libuv built, get CMake built, get PG built).
The above is all about getting the build system to work at all. If that isn't a showstopper, there's a subsequent discussion to be had about older platforms where one could get the build system to work but convenient packages are missing. For example, not even RHEL7 has any Python 3 packages in the base system (it does in Software Collections, though), which means some extra hoops to get Meson running there. And RHEL5 is in an even worse spot: it has no Software Collections, and who knows whether Python 3 builds on it out of the box.
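To make the cascade concrete, here is a small, purely illustrative preflight sketch in shell. The tool list just mirrors the dependencies discussed above; it is not an official requirements list:

```shell
# Illustrative only: check which of the build-system dependencies
# discussed above are already present on this machine.  The exact
# list depends on whether the CMake or the Meson route is taken.
missing=""
for tool in cc c++ make python3 ninja meson cmake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "MISSING: $tool"
    missing="$missing $tool"
  fi
done
echo "tools needing bootstrap:${missing:- none}"
```

On a platform like HP-UX or AIX, each MISSING line is potentially a bootstrap project of its own, which is exactly the friction being weighed here.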
About CMake:
We can use the 3.9 version very well; it has all the features we need, at least for my postgres_cmake branch, and I have the experience to introduce features into CMake specifically for the Postgres build.
Also, CMake builds very easily even on Solaris or AIX (on my webpage I have examples of building Postgres with CMake on these systems).
But I totally agree with the point of this topic: we can't keep the same support matrix, and we can't keep 100% the same behavior and "build" interface. Maybe behavior should be the second question.
In my opinion, I have already done too much work without an answer to this important question.
2018-04-19 14:30 GMT+09:00 Catalin Iacob <iacobcatalin@gmail.com>:
Re: Is a modern build system acceptable for older platforms
From: Darafei "Komяpa" Praliaskouski
> The above is all about getting the build system to work at all. If that isn't a showstopper there's a subsequent discussion to be had about older platforms where one could get the build system to work but convenient packages are missing. For example not even RHEL7 has any Python3 packages in the base system (it does in Software Collections though) which means some extra hoops on getting meson running there. And RHEL5 is in an even worse spot as it has no Software Collections, who knows if Python 3 builds on it out of the box etc.
I would expect that a new version of software should not target versions of a platform that are past the end of full support. Per https://access.redhat.com/support/policy/updates/errata, currently only RHEL7 is at Full Support, and RHEL5 is already past Product Retirement. I would say it's fine to keep supporting these on already-released branches, while limiting new releases to platforms still under full support.
PostGIS has several forks that move it towards CMake (a five-year-old ticket, https://trac.osgeo.org/postgis/ticket/2362, and forks https://github.com/nextgis-borsch/postgis and https://github.com/mloskot/postgis/tree/cmake-build) - none of these are upstream, mostly because there's an expectation to match the Postgres build system. If Postgres moved to CMake (there are already CMake-enabled forks available for people who want them), then I expect PostGIS would quickly catch up.
A lot of libraries PostGIS depends on are already built using CMake, so if the platform has recent PostGIS it has CMake available somehow.
Darafei Praliaskouski,
GIS Engineer / Juno Minsk
Darafei "Komяpa" Praliaskouski <me@komzpa.net> writes:
>> The above is all about getting the build system to work at all. If that
>> isn't a showstopper there's a subsequent discussion to be had about older
>> platforms where one could get the build system to work but convenient
>> packages are missing. ...
> I would expect that a new version of software should not target versions of
> platform that are end of full support.

The other side of that argument is that allowing a build system we haven't even adopted yet to dictate which platforms we can support is definitely letting the tail wag the dog.

My gut reaction to Catalin's list is that requiring C++11 is a pretty darn high bar to clear for older platforms. I have a positive impression of python's portability, so requiring a recent python version might not be too awful ... but then requiring ninja pretty much tosses away the advantage again. So, while in principle you could probably get these toolchains going on an old platform, the reality is that moving to either will amount to "we're desupporting everything that wasn't released in this decade". That's a pretty big shift from the project's traditional mindset. It's possible that our users wouldn't care; I don't know. But to me it's a significant minus that we'd have to set against whatever pluses are claimed for a move.

regards, tom lane
> My gut reaction to Catalin's list is that requiring C++11 is a pretty
> darn high bar to clear for older platforms.
It's only for the latest version; I think we can support version 3.9, with C++98, for at least 5 years.
3.9.6 was released on November 10, 2017.
> That's a pretty big shift from the project's traditional
> mindset.

Sure, but I think it should happen from time to time.
> But to me it's a significant minus that we'd have to set against whatever
> pluses are claimed for a move.

These are obvious minuses, but I still can't understand your position on this question.
Regards
On Thu, Apr 19, 2018 at 10:16 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The other side of that argument is that allowing a build system we haven't
> even adopted yet to dictate which platforms we can support is definitely
> letting the tail wag the dog.
>
> My gut reaction to Catalin's list is that requiring C++11 is a pretty
> darn high bar to clear for older platforms. I have a positive impression
> of python's portability, so requiring a recent python version might not
> be too awful ... but then requiring ninja pretty much tosses away the
> advantage again. So, while in principle you could probably get these
> toolchains going on an old platform, the reality is that moving to either
> will amount to "we're desupporting everything that wasn't released in
> this decade". That's a pretty big shift from the project's traditional
> mindset. It's possible that our users wouldn't care; I don't know.
> But to me it's a significant minus that we'd have to set against whatever
> pluses are claimed for a move.

Yeah, I agree. I am not deathly opposed to moving, but I'd like to be convinced that we're going to get real advantages from such a move, and so far I'm not. The arguments thus far advanced for moving boil down to (1) the current system is kind of old and creaky, which is true but which I'm not sure is by itself a compelling argument for changing anything, and (2) it might make things easier on Windows, which could be a sufficiently good reason but I don't think I've seen anyone explain exactly how much easier it will make things and in what ways.

I think it's inevitable that a move like this will create some disruption -- developers will need to install and learn new tools, buildfarm members will need updating, and there will be some bugs. None of that is a blocker, but the gains need to outweigh those disadvantages, and we can't judge whether they do without a clear explanation of what the gains will be.
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
(2) it might make things easier on Windows,
which could be a sufficiently good reason but I don't think I've seen
anyone explain exactly how much easier it will make things and in what
ways.
1. You can remove the tools/msvc folder because all build rules will be universal. (The CMake build already has much fewer lines of code.)
2. You can forget about the terminal on Windows (for Windows guys it's important).
3. You can properly check the environment on Windows; right now we have hardcoded headers and many options. The configure process will be the same on all platforms.
4. You can generate not only GNU Make or MSVC projects; you can also make Xcode projects, Ninja, or NMake builds under MSVC. For Windows you can also easily switch MSVC to Clang, it's not hardcoded at all.
5. With CMake you have an easy way to build extra modules (plugins); I already have a working prototype of PGXS for Windows. A plugin just has to include a .cmake file generated by the Postgres build.
Example: https://github.com/stalkerg/postgres_cmake/blob/cmake/contrib/adminpack/CMakeLists.txt - if PGXS is True, it means we build the module outside Postgres.
But in my opinion, you should just try CMake to figure out all benefits.
> we can't judge whether they do without a clear explanation of what the gains will be

I think it's not the kind of thing that's easy to explain. The main benefits are not in the Unix console area or the C language...
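For readers who haven't seen Yuriy's branch, the PGXS-style workflow in point 5 above can be sketched roughly as follows. This is a hypothetical mock-up: PostgresConfig.cmake, PG_CMAKE_DIR, and PG_INCLUDE_DIRS are invented names for illustration, not the actual interface of postgres_cmake:

```shell
# Hypothetical sketch of an out-of-tree extension build in the
# style described above.  All CMake names below are placeholders.
work=$(mktemp -d) && cd "$work"

cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.9)
project(my_extension C)

# A file like this would be generated and installed by the
# Postgres build itself, carrying include paths and helpers.
include("${PG_CMAKE_DIR}/PostgresConfig.cmake")

add_library(my_extension MODULE my_extension.c)
target_include_directories(my_extension PRIVATE ${PG_INCLUDE_DIRS})
EOF

# An out-of-tree build would then be (not run here):
#   cmake -S . -B build -DPG_CMAKE_DIR=/usr/local/pgsql/lib/cmake
#   cmake --build build
echo "sketched CMakeLists.txt in $work"
```

The point of the pattern is that the extension never touches the Postgres source tree; everything it needs comes from the generated config file.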
On 27/04/18 19:10, Yuriy Zhuravlev wrote:
> 1. You can remove tools/msvc folder because all your build rules will
> be universal. (cmake build now have much fewer lines of code)
> [...]
> But in my opinion, you should just try CMake to figure out all benefits.

I note that MySQL (yeah I know, we don't love 'em greatly, but their product niche is similar to ours) and Ceph (ok, it is a distributed storage system, but still a highly popular open source product) have switched to using CMake (relatively) recently. Both these projects were using autoconf-related builds previously and seem to be doing just fine with CMake.

regards

Mark
On 27.04.2018 10:45, Mark Kirkwood wrote:
> I note that MySQL [...] and Ceph [...] have switched to using cmake
> (relatively) recently. Both these projects were using autoconf etc
> related builds previously and seem to be doing just fine with cmake.

I lived through that transition at MySQL, and later at SkySQL/MariaDB.

Windows builds have been using CMake since somewhere in the MySQL 5.0 series at least. For a while the autotools and CMake build systems coexisted side by side, until everything was unified to use CMake only in the 5.5 series, which became "GA" in 2010, so "(relatively) recently" is rather relative. Having to maintain two different build systems, and keep them in sync, obviously wasn't a good thing to do in the long run, and CMake (plus CPack and friends) has proven itself to be "good enough" for quite a while.

There are several things that autotools provide out of the box that I still miss with CMake. The most important one is "make distcheck": it checks that creating a source distribution tarball, unpacking it, and doing an "out of source" build with it does not modify the actual source tree, runs tests, and applies some release best-practice sanity checks, e.g. checking whether the ChangeLog file looks up to date. As far as I can tell there's no CMake equivalent to that at all, which is especially "funny" as CMake used to advertise its preference for out-of-source builds as an advantage over autotools. Often enough over the years we ended up with builds writing to the source directory tree instead of the build tree.

Makefiles generated by automake are more feature-rich in general, which is understandable as it's the only backend automake has to support. Some of the CMake choices there are just weird though, like their refusal to support "make uninstall" out of the box.
Some may also think that getting rid of a mix of bash, m4, and Makefile code snippets may be a good thing (which in itself is true), but CMake replaces this with its own home-grown language that's not used anywhere else, and which comes with only a very rudimentary lex/yacc parser, leading to several semi-consistent function argument parsing hacks.

The bundled package libs that come with CMake made some builds easier, but this part didn't seem to be seeing much love anymore last time I looked. Meanwhile the Autoconf Macro Archive has at least partly closed that gap.

Also, last time I looked, CMake had nothing really comparable to autotools submodules (bundled sub-projects that come with their own autotools infrastructure and could be built standalone, but in a bundled context will inherit all "configure" settings from the top level invocation).

There was also some weird thing about CMake changing shared library default locations in already built binaries on the fly on "make install", so that they work both in the install and build context, e.g. for running tests before installing. Autotools handle this by building for the install context right away, and setting up wrapper scripts that set up load paths for libs in the build context for pre-install testing. In this particular case I don't really trust either approach, so that one's a tie.

What else? CMake's more aggressive caching behavior can be confusing, but then again that's really just a matter of preference. Its command line argument parsing and help output are inferior to autotools configure: all project specific options have to start with a -D, and help output for these is strictly alphabetical, while with autoconf you can group related options in help output, and on modern Linux distributions there's good tab completion for them, too. cmake-gui is advertised to solve much of this, but it still suffers from the alphabetic listing problem.
I could probably continue with this brain dump forever, but in the end it comes down to: there's no real alternative when you have to support Windows, and it is "good enough" on Unix, so maintaining CMake and autotools build setups in parallel isn't really justified in the long run.

PS: when you actually watch a full MariaDB CMake

-- Hartmut, former MySQL guy now working for MariaDB
> Makefiles generated by automake are more feature rich in general,
> which is understandable as it's the only backend it has to support.

The main problem here: Postgres does not use automake at all!
Postgres is autoconf + hand-made GNU Make files + a Perl script that generates the old MSVC project from those Makefiles.
"make distcheck"
CMake have no this bad concept, in my opinion, if you want to make the project you should have a full build environment. (but I don't want to argue about it here)
Also, as I wrote before, CMake it's not equivalent of GNU Make or Autoconf, many your reasons based on that fact what CMake, is not a build system it's more like project generation system.
And anyway, you have no option if you want to support Windows without pain and much more hacks ways.
On 27 April 2018 at 15:10, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
> 1. You can remove tools/msvc folder because all your build rules will be
> universal. (cmake build now have much fewer lines of code)

Which is nice, but not actually a major day to day impact.

> 2. You can forget about terminal in Windows (for windows guys it's
> important)

OK, but it's not really important for the PostgreSQL project, IMO. Also, most people working on PostgreSQL are probably less bothered by the terminal.

> 3. You can normally check environment on Windows, right now we have
> hardcoded headers and many options. Configure process will be same on all
> platforms.

Again, nice, but does that solve a real current problem?

> 4. You can generate not only GNU Make or MSVC project, you also can make
> Xcode projects, Ninja or NMake for build under MSVC Make. For Windows, you
> also can easily change MSVC to Clang it's not hardcoded at all.

Yeah, that's nice, but again, what're the real world benefits?

> 5. With CMake you have an easy way to build extra modules (plugins), I have
> already working prototype for windows PGXS. A plugin should just include
> .cmake file generated with Postgres build.

Yep. FWIW, I already use CMake for some PostgreSQL extensions because of PGXS limitations and Windows support. I won't say I'm a big fan, the documentation is a bit stale and it has some weird quirks and limitations, but compared to autohell it's pure magic.

> But in my opinion, you should just try CMake to figure out all benefits.

I use it fairly regularly. I'd never use autotools for any project I was starting myself. But that doesn't mean converting the whole postgres project is a good idea. I'd do it, personally. But it's not just up to me. I've yet to hear something that's compelling to a team who still set Perl 5.8.8 as the minimum version and support SunOS. You'll need a compelling argument that it's worth the pain and disruption.
-- Craig Ringer http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On 28.04.2018 05:27, Yuriy Zhuravlev wrote:
> "make distcheck"
>
> CMake have no this bad concept, in my opinion, if you want to make the
> project you should have a full build environment. (but I don't want to
> argue about it here)

this is not about having a working build environment, it is about having a fully working and correct source tarball before distributing it as a new release. What "make distcheck" does is an end-to-end check of the configuration and build process itself. So what it does is:

* create a source tarball, putting the version number given in configure.ac into the tarball name
* unpack the source tarball in a temporary directory, make everything read-only
* create a build directory, run "configure" in there
* build with "make"
* run project test suite with "make check"
* run "make install" into a tmp directory, then "make uninstall", to check that installation works, and that uninstall removes everything again
* run "make distclean" to check that cleanup really works as expected

So things spotted by this are e.g.:

* missing files that didn't end up in the src tarball ("works on my computer")
* files being created in srcdir instead of builddir during build
* installed files missed by uninstall (ok, CMake developers have a strong opinion about "make uninstall" anyway)
* generated files that are not properly getting cleaned up
* ... more things

Especially the srcdir vs builddir one is one I'm missing very much with CMake; it happened several times that such problems slipped through into MySQL and MariaDB releases, and I've seen it in other projects using CMake, too.

-- hartmut
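The srcdir-vs-builddir failure mode Hartmut describes can be demonstrated with a tiny self-contained shell sketch, with a plain cp standing in for a real configure && make (no autotools or CMake required):

```shell
# Sketch of the distcheck idea: freeze the unpacked source tree,
# then "build" strictly out of source.  Any step that accidentally
# wrote into srcdir would fail right here instead of slipping into
# a release.
set -e
work=$(mktemp -d)
mkdir -p "$work/srcdir" "$work/builddir"
echo 'int main(void){return 0;}' > "$work/srcdir/main.c"
chmod -R a-w "$work/srcdir"                 # read-only source tree

cp "$work/srcdir/main.c" "$work/builddir/"  # writes only into builddir

chmod -R u+w "$work/srcdir"                 # unfreeze for cleanup
rm -rf "$work"
distcheck_ok=yes
echo "out-of-source build left srcdir untouched"
```

A real distcheck additionally round-trips the tarball, the test suite, install/uninstall, and distclean, as the list above describes; this sketch isolates only the read-only-srcdir trick.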
> Which is nice
> OK
> Again, nice
> Yeah, that's nice
> I already use CMake
> I'd do it, personally

I suppose we have no single silver-bullet reason to change autoconf+make to CMake; it's a cumulative impression.

Also, I had supposed this thread would resolve at least one small question. It should be something like voting, but I see only a few people here, and your and Tom's answers are very strange: you can't say definitely yes or no, and you are thinking more about other people who are not here.
----
Some specific answers:
> Again, nice, but does that solve a real current problem?

This is the main reason why PGXS does not exist on Windows; it also solves problems with differences between MSVC versions and future releases.
> Yeah, that's nice, but again, what're the real world benefits?

It's just convenient, for example for easy work with Xcode.
> Also, most people working on PostgreSQL are probably less bothered by
> the terminal.

It's another reason why Windows users and students don't want to hack on Postgres.
> Yep. FWIW, I already use CMake for some PostgreSQL extensions because
> of PGXS limitations and Windows support.

And now it can be a problem if your Postgres was built with MinGW or LLVM, for example. (Even under MSVC you can use Clang now.) The environment is starting to change quickly, and it's too much effort to support all this by yourself.
> this is not about having a working build environment, it is about having
> a fully working and correct source tarball before distributing it as a
> new release.

Sorry, I did not understand it correctly before. I suppose it's not a big problem, especially if you have CI and a test farm. And anyway, in Postgres distcheck is hand-made code, and you can make the same script for CMake too:
https://github.com/stalkerg/postgres_cmake/blob/cmake/GNUmakefile.in#L111
> https://github.com/stalkerg/postgres_cmake/blob/cmake/GNUmakefile.in#L111

and it's not working for Windows. ;) You should stop thinking of Postgres as a common autotools project.
On Fri, Apr 27, 2018 at 5:46 AM, Hartmut Holzgraefe <hartmut.holzgraefe@gmail.com> wrote:
> I could probably continue with this brain dump forever, ...

I found your brain dump an interesting read, and I have to say that it leaves me rather uninspired about making a change. It sounds to me like if we change, some things will be better and others will not be as good. The good news is that if we decide to change, it sounds like we won't be a lot worse off than we are today. The bad news is that it doesn't sound like we'll be a lot better off, either.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 2018-05-01 12:19:28 -0400, Robert Haas wrote:
> On Fri, Apr 27, 2018 at 5:46 AM, Hartmut Holzgraefe
> <hartmut.holzgraefe@gmail.com> wrote:
> > I could probably continue with this brain dump forever, ...
>
> I found your brain dump an interesting read, and I have to say that it
> leaves me rather uninspired about making a change. It sounds to me
> like if we change, some things will be better and others will not be
> as good. The good news is that if we decide to change, it sounds like
> we won't be a lot worse off than we are today. The bad news is that
> it doesn't sound like we'll be a lot better off, either.

How is being able to build extensions on windows reasonably not an improvement? It's really hard to build pgxs like stuff on windows right now. Also not having to maintain a fair amount of visual studio project generation code? And getting faster builds that don't suffer from weird parallelism issues because dependencies can't be expressed properly in parallel make? ...

It seems fair to argue that it's not worth the pain to get there, but how it'd not be an improvement to be there I really don't get.

Greetings,

Andres Freund
I'd like to add my 2c that, as a user who has to support postgres running on some fairly old systems, changing to a modern build mechanism (with all the resultant dependency hell that it would likely introduce) would be likely to cause me much grief.
At the moment I can still log in to the old RH Shrike box I keep specifically for building for older systems (it does admittedly have a more recent gcc, but even building that was a trial) and build Postgres from source. Unless I've misunderstood I strongly doubt that would still be the case with the changes being discussed here.
Geoff
On Tue, May 1, 2018 at 12:31 PM, Andres Freund <andres@anarazel.de> wrote:
> How is being able to build extensions on windows reasonably not an
> improvement? It's really hard to build pgxs like stuff on windows right
> now. Also not having to maintain a fair amount of visual studio project
> generation code? And getting faster builds that don't suffer from weird
> parallelism issues because dependencies can't be expressed properly in
> parallel make? ...

Sure, those are notable advantages. Thanks for articulating them so clearly.

> It seems fair to argue that it's not worth the pain to get there, but
> how it'd not be an improvement to be there I really don't get.

Well that's probably because you understand cmake. I don't.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Andres Freund <andres@anarazel.de> writes:
> On 2018-05-01 12:19:28 -0400, Robert Haas wrote:
>> I found your brain dump an interesting read, and I have to say that it
>> leaves me rather uninspired about making a change.

> How is being able to build extensions on windows reasonably not an
> improvement?

That indeed would be an improvement, but maybe we could fix that specific pain point without having to throw away twenty years worth of work?

The amount of accumulated knowledge we've got in the existing build system is slightly staggering ... so I'm afraid that moving to a different one would involve a lot of expensive re-invention of portability hacks.

Of course, blowing off support for any platform not released in the last five years would cut down on the number of such hacks that we'd need to reinvent. But that's not a tradeoff I especially like either.

regards, tom lane
On Tue, May 1, 2018 at 12:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> That indeed would be an improvement, but maybe we could fix that specific
> pain point without having to throw away twenty years worth of work?

Indeed. It's possible today to use CMake, without a huge amount of difficulty, to build extensions out of tree against MSVC-built postgres. This was more or less the topic of my talk in Ottawa last year, based on some excellent work by Craig Ringer. To my certain knowledge this is being used successfully today. Testing is a different story, but building is a nut that's more or less been cracked.

There is also the point that EDB, according to my understanding, is considering moving back, or has perhaps already moved back, to Msys/Mingw-64 based builds, due to the runtime hell that MSVC can get you into. And we know perfectly well how to build extensions out of tree against such a build. You do it just like on Unix.

> The amount of accumulated knowledge we've got in the existing build system
> is slightly staggering ... so I'm afraid that moving to a different one
> would involve a lot of expensive re-invention of portability hacks.
>
> Of course, blowing off support for any platform not released in the
> last five years would cut down on the number of such hacks that we'd
> need to reinvent. But that's not a tradeoff I especially like either.

No, me either. CMake is hardly a bed of roses, either, BTW.

cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hello Geoff!
About cmake:
1. You can still use the binary build for your system.
2. You can still build Postgres from source with an old gcc; you only need to install cmake (which is very easy). Only the most modern versions of CMake depend on a modern gcc. I have good experience with old Solaris and AIX (I mean building Postgres with the current cmake branch).
3. You can try it and post your impressions on the issue tracker on GitHub.
Thanks.
2018-05-02 1:41 GMT+09:00 Geoff Winkless <pgsqladmin@geoff.dj>:
> That indeed would be an improvement, but maybe we could fix that specific
> pain point without having to throw away twenty years worth of work?
Indeed! Only a few thousand lines of code can generate everything you wrote manually - it's the perfect result!
> re-invention of portability hacks

This is the main point of migrating to cmake - cmake takes most of those hacks onto itself.
> Of course, blowing off support for any platform not released in the
> last five years would cut down on the number of such hacks that we'd
> need to reinvent.
I think that is the wrong goal, and speculation:
1. Systems up to 5 years old we can support out of the box; they should just work.
2. Systems up to 10 years old we will support too, but maybe with some small extra steps from users (for example, even now a Windows user has to take tons of extra steps to build Postgres).
3. Systems more than 10 years old can probably still work, but that should be left to the enthusiasm of individual users; we shouldn't worry about them.
Ten years, with the door open for more, is not the same as just 5 years. This looks like a good trade-off.
> Indeed. It's possibly today to use CMake without a huge amount of
> difficulty to build extensions out of tree against MSVC-built
> postgres.
How? All the builds I have seen involved tons of hacks.

On Windows, Postgres can be built with MinGW, many versions of MSVC, and so on. Also, you can build Postgres without some features, or with extra ones, and there is no good way to convey that knowledge to an external CMake build system.

At the least we should replace the Windows build system with cmake, and if you worry about the consistency of the source file lists (a very small problem, actually) you can use the current Perl scripts to generate the file lists for CMake; it would be the same as your current one-and-a-half build systems.
On Tue, May 1, 2018 at 8:20 PM, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
>> Indeed. It's possibly today to use CMake without a huge amount of
>> difficulty to build extensions out of tree against MSVC-built
>> postgres.
>
> How? All builds what I saw was with tons of hacks.

There is a simple example here:
<https://bitbucket.org/adunstan/pg-closed-ranges/src/0475b50ff793ce876a78c96d72903c9793a98fc1/?at=cmake>

No tons of hacks.

> On windows, Postgres can build against Mingw, many versions of MSVC and etc
> Also, you can build Postgres without some features or with extra and no good
> way to put this knowledge to CMake build system.
>
> At least we should replace Windows build system by cmake and if your worry
> about consistency of source files (it's very small problem actually) you can
> use current Perl script to generate files list for CMake, it will be same
> as your 1.5 build system.

That would just add to the knowledge that developers and committers would need. "One more level of indirection" is rarely the right solution.

cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
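[Editor's note: for readers unfamiliar with the approach Andrew's example repository demonstrates, an out-of-tree extension CMakeLists.txt looks roughly like the sketch below. It assumes a FindPostgreSQL.cmake module shipped alongside the extension, as in the example repo; the `PostgreSQL_*` variable names and the `my_ext` target are illustrative, not taken from the actual repository.]

```cmake
# Hypothetical CMakeLists.txt for an out-of-tree extension "my_ext".
# Assumes a FindPostgreSQL.cmake module in ./cmake that locates an
# installed server via pg_config; all PostgreSQL_* names are illustrative.
cmake_minimum_required(VERSION 3.0)
project(my_ext C)

list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
find_package(PostgreSQL REQUIRED)

# MODULE, not SHARED: a loadable .so, and no "lib" prefix.
add_library(my_ext MODULE my_ext.c)
set_target_properties(my_ext PROPERTIES PREFIX "")
target_include_directories(my_ext PRIVATE ${PostgreSQL_SERVER_INCLUDE_DIRS})

# Install the shared library plus control/SQL files where the server
# expects extensions to live.
install(TARGETS my_ext DESTINATION ${PostgreSQL_PACKAGE_LIBRARY_DIR})
install(FILES my_ext.control my_ext--1.0.sql
        DESTINATION ${PostgreSQL_EXTENSION_DIR})
```

The whole trick is in the find module: once it can interrogate pg_config (or the MSVC install layout), the extension build itself is ordinary CMake.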
> No tons of hacks.

And functions too: https://bitbucket.org/adunstan/pg-closed-ranges/raw/0475b50ff793ce876a78c96d72903c9793a98fc1/cmake/FindPostgreSQL.cmake

I mean things like HAVE_LONG_LONG_INT: you can't figure that out at the "configure" stage without parsing config.h in CMake. Also, maybe I am wrong, but you can't consistently check 32/64-bit between the target Postgres and your current environment. And so on, and so on.
> That would just add to the knowledge that developers and committers
> would need. "One more level of indirection" is rarely the right
> solution.
A lot of similar projects have made this transition and moved to CMake; what is the problem with Postgres?
After a small check I found the following: we need gcc 4.8 anyway for libjit, and that means RHEL 7 and newer: https://access.redhat.com/solutions/19458
because gcc 4.8 is needed to build LLVM.
On May 1, 2018 9:26:27 PM PDT, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
> After small check I found next:
> we need gcc 4.8 anyway for libjit and it means RHEL 7 and newer:
> https://access.redhat.com/solutions/19458
> because 4.8 needed to build LLVM.

We don't use libjit. As for the llvm stuff - that's an optional dependency, i.e. irrelevant as far as determining baseline requirements is concerned.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
On Wed, 2 May 2018 at 00:57, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
> Hello Geoff!
> About cmake:
> 1. You can still use the binary build for your system.
> 2. You can still build Postgres from source and with old gcc, you need
> only install cmake (it's very easy) Only most modern versions of CMake
> depend on modern gcc. I have good experience with old Solaris and AIX.
> (I mean build Postgres by current cmake branch).
> 3. You can try and put your impressions on issue tracker on github.
First, please don't top post.
Second, I can't "use the binary build", there isn't one for the systems I'm talking about.
Third, as you said, newer cmake refuses to build on this system. Admittedly v2 built fine, but how long until someone tells me something like "oh well, we need to use bracket arguments otherwise our files are terribly hard to read so you need v3. It shouldn't be that hard to build, you only need to compile gcc 4, and that's at least 5 years old, so it's time you upgraded".
Being blunt, unless I've missed the point all the arguments I've read so far for cmake seem to be advantages for the developers, not the users. As developers who put in your time you are of course entitled to make your lives easier but I'm just making the counterpoint that if you do so at the expense of your users you lose a certain amount of goodwill. It's up to you all how much that matters.
Geoff
Geoff Winkless <pgsqladmin@geoff.dj> writes:
> Being blunt, unless I've missed the point all the arguments I've read so
> far for cmake seem to be advantages for the developers, not the users. As
> developers who put in your time you are of course entitled to make your
> lives easier but I'm just making the counterpoint that if you do so at the
> expense of your users you lose a certain amount of goodwill. It's up to you
> all how much that matters.

Yeah, one of the things that I find to be a very significant turn-off in these proposals is that they'd break the "configure; make; make install" ritual that so many people are accustomed to. User-unfriendly decisions like cmake's approach to configuration switches (-D? really?) are icing on top of what's already an un-tasty cake. What we do internally is our business, but these things are part of the package's API in a real sense. Changing them has a cost, one that's not all borne by us.

regards, tom lane
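[Editor's note: for readers who haven't used cmake, the difference in ritual looks roughly like this. The `WITH_OPENSSL` cache-variable name is made up for illustration; only `CMAKE_INSTALL_PREFIX` and PostgreSQL's `--with-openssl` are real switches.]

```sh
# The ritual users know today:
./configure --prefix=/usr/local/pgsql --with-openssl
make
make install

# A hypothetical cmake equivalent: every configuration switch becomes a
# -D cache variable, set from a separate build directory
# (WITH_OPENSSL is an illustrative, made-up option name):
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/pgsql -DWITH_OPENSSL=ON ..
make && make install
```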
On 02.05.2018 16:22, Tom Lane wrote:
> (-D? really?)

it's worse ... "cmake -L" only produces an alphabetically sorted list of known -D settings, just listing the names without descriptions. That's so far behind what ./configure --help produces.

(And don't get me started on cmake-gui. One day I may even eventually complete my "autotools-gui" ... https://github.com/hholzgra/autogui )

But at least on most Linux distributions TAB completion now works for CMake -D options these days ...

--
hartmut
On Tue, May 1, 2018 at 8:12 PM, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
>> That indeed would be an improvement, but maybe we could fix that specific
>> pain point without having to throw away twenty years worth of work?
>
> Indeed! Only a few thousands of lines of code can generate the whole you
> manually wrote, it's the perfect result!

I don't think that unsubstantiated hyperbole is the right way to approach the task of convincing the community to adopt the approach you prefer. I don't see that any compelling evidence has been presented that a cmake-based solution would really save thousands of lines of code. True, some Perl code that we have now to generate project files and so forth would go away, but I bet we'll end up adding new code someplace else because of something-or-other that doesn't work the same under cmake that it does under the current build system. For example:

>> re-invention of portability hacks
> This is the main goal for migrating to cmake - most of hacks cmake takes on
> itself.

Whatever hacks cmake *doesn't* handle will have to be reimplemented in the cmake framework, and frankly, if history is any indication, we'll be very lucky indeed if whoever submits the cmake patches is willing to follow up on the things that break over the days, weeks, months, and years that follow the original commit. More likely, after the first few commits, or perhaps the first few months, they'll move on to their next project and leave it to the committers to sort out whatever stuff turns out to be broken later. And very likely, too, they'll not handle all the changes that are needed on the buildfarm side of things, and maybe the PGXN side of things if that needs changes, and they certainly won't update every third-party module in existence to use the cmake framework.

Accepting a patch to support cmake means some amount of work and adaptation will need to be done by hundreds of developers on both the core PostgreSQL code base and various other code bases, open source and proprietary. Now it's probably not a lot of work for any individual person, but it's a lot of work and disruption over all. It has to be worth it.

Now, I grant that my ears perked up when Andres mentioned making parallel make work better. I don't build on Windows so that issue doesn't affect me personally -- it's great if it can be made to work better with or without cmake but I don't have a view on the best way forward. But having parallel make work better and more efficiently and with fewer hard-to-diagnose failure modes would definitely be nice.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 02.05.2018 17:44, Robert Haas wrote:
> But having parallel make work better and more efficiently
> and with fewer hard-to-diagnose failure modes would definitely be
> nice.

that's especially a thing I haven't seen in "our" environment; this was an area where autotools and cmake didn't really differ, at least not for the Unix/Makefile side of things. The only thing about parallelism I remember is that it sometimes doesn't work well with the progress-percentage output of cmake-generated makefiles ... but that's purely cosmetic.

--
hartmut
On 2018-05-02 23:43:50 +0200, Hartmut Holzgraefe wrote:
> On 02.05.2018 17:44, Robert Haas wrote:
>> But having parallel make work better and more efficiently
>> and with fewer hard-to-diagnose failure modes would definitely be
>> nice.
>
> that's especially a thing I haven't seen in "our" environment,
> this was an area where autotools and cmake didn't really differ,
> at least not for the Unix/Makefile side of things.

Recursive make like ours can't do full parallelism because dependencies can't be fully expressed. With cmake that's not an issue. And its ninja generator ends up being considerably faster than makefiles.

Now you could argue that we could just rewrite to non-recursive make. But that'd be nearly as much work as migrating to another buildsystem.

Greetings,

Andres Freund
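[Editor's note: the kind of missed dependency Andres describes can be sketched with a toy recursive setup; the directory and file names are illustrative, not PostgreSQL's real tree.]

```make
# Sketch: a recursive top-level Makefile only sees directory-granularity
# ordering, so it must serialize whole sub-makes.
all:
	$(MAKE) -C common    # produces common/generated.h
	$(MAKE) -C backend   # backend/foo.o includes common/generated.h

# The actual file-level edge lives inside the subdirectories, invisible
# to the top level. In one global, non-recursive graph (what cmake/ninja
# build) only this single edge serializes and everything else in both
# directories can compile in parallel:
#
#   backend/foo.o: backend/foo.c common/generated.h
```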
Andres Freund <andres@anarazel.de> writes:
> Now you could argue that we could just rewrite to non-recursive
> make. But that'd be nearly as much work as migrating to another
> buildsystem.

I'm sure it'd be a significant amount of work ... but it wouldn't require redesigning any configuration or portability hacks, nor any change in tool prerequisites, and at least in principle it wouldn't require changes in users' build scripts. So I think claiming it's as expensive as migrating to cmake is almost certainly wrong.

(I don't know offhand if tricks like "build plpython only" would still work unchanged, but that's a sufficiently niche usage that I'd not be too concerned about making those people adapt their scripts.)

regards, tom lane
On Wed, May 2, 2018 at 5:44 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> I don't think that unsubstantiated hyperbole is the right way to
> approach the task of convincing the community to adopt the approach
> you prefer. I don't see that any compelling evidence has been
> presented that a cmake-based solution would really save thousands of
> lines of code.

Let me try to list the advantages as I see them.

* ability to use ninja
** meson's requirement to use ninja might be a disadvantage, but the ability to use it is definitely good
** faster than make - the difference is really noticeable
** good dependency story, parallel everything
** simply a very nice developer experience; for example the screen is not filled with scrolling lines, instead progress updates are shown as x/y files to go, currently at file z; try it and you'll see what I mean
** I got interested in ninja for PG, and therefore in CMake or meson, after trying clang-cl.exe for PG on Windows. clang-cl.exe is a drop-in open source replacement for Microsoft's cl.exe, but using it does not interact well with the fact that MSBuild runs only one cl.exe with lots of .c files as input and expects cl.exe to handle the parallelism, while clang-cl.exe does not handle any parallelism, taking the position that the build system should handle that. Being able to invoke clang-cl.exe from ninja instead of MSBuild would make fast compilation with clang-cl.exe easy, while now only slow serial compilation is possible.
* IDE integration
** cmake and meson can generate XCode and Visual Studio projects; granted, Visual Studio already works via the MSVC scripts
** CLion can consume cmake, giving a good IDE story on Linux which PG currently lacks

* get rid of the ad-hoc MSVC generation Perl scripts
** granted, I looked at those recently in the clang-cl context above and they're reasonably understandable/approachable even without knowing too much Perl

* appeal to new developers
** I think it's not a controversial statement that, as time passes, autotools and make syntax are seen more and more as arcane things that only old beards know how to handle, and that the exciting stuff has moved elsewhere; in the long run this is a real problem
** on the other hand, as an almost complete autotools novice, after reading some autotools docs I was pleasantly surprised at how small and easy to follow Andres' build patch adding LLVM and C++ support was, especially as it does big, unconventional things: add support for another compiler, but in a specific "emit LLVM bitcode" mode, add support for C++, etc. So autoconf ugliness is not that big of a deal, but perception does matter.

* PGXS on Windows
** could be solvable without moving wholesale

From the above, I would rate ninja as a high nice-to-have; IDE integration, PGXS on Windows and new developers as medium-high nice-to-haves (but see below for long term concerns); and getting rid of the MSVC Perl scripts as a low nice-to-have.

I started the thread because it seemed to me energy was being spent on moving to another system (proofs of concept and discussions) while it wasn't even clarified whether a new system isn't a complete no-go due to the old platforms PG supports. I find Tom's and Robert's position of "acceptable but we would need to see real benefits as there definitely are real downsides" perfectly reasonable. The build system dictating platform support would indeed be the tail wagging the dog.
Personally, with the current information, I'd not vote for switching to another system, mainly because I ultimately think developer convenience should not trump end user benefits.

I do have a real concern about the long term attractiveness of the project to new developers, especially younger ones, as time passes. It's not a secret that people will just avoid creaky old projects, and for Postgres the old, out-of-fashion things do add up: autoconf, raw make, Perl for tests, C89, old platform support. I have no doubt that the project is already losing competent potential developers due to this. One can say this is superficial and those developers should look at the important things, but that does not change the reality that some will just pass because of dislike of the old technologies I mentioned. Personally, I can say that if the project were still in CVS I would probably not bother, as I just don't have the energy to learn an inferior old version control system, especially as I see version control as fundamental to a developer. I don't feel the balance between recruiting new developers and end user benefits is tilted enough to replace the build system, but maybe in some years that will be the case.
> I don't think that unsubstantiated hyperbole is the right way to
> approach the task of convincing the community to adopt the approach
> you prefer.
It's not hyperbole, it's fact, and I even talked about it at a conference. You can just compare all my cmake files with the Makefiles + .in + .m4 files (and the msvc folder); it is a significant reduction in the code to maintain. Anyway, my whole intention in this area is to reduce the pain and the support time for the build system.
Current state:

cat `find ./ | grep '\.in\|\.m4\|Makefile\|\/msvc\/'` | wc
  22942   76111  702163
cat `find ./ | grep 'CMakeLists\|\.cmake'` | wc
   9160   16604  278061
Also, I use a code style where every source file name goes on its own line, which seriously increases the number of lines. If I used the same style as in the Makefiles it would be ~3000 lines (you can just compare the word counts ;) )
Regards.
On 2018-05-03 09:29:32 +0900, Yuriy Zhuravlev wrote:
>>> I don't think that unsubstantiated hyperbole is the right way to
>>> approach the task of convincing the community to adopt the approach
>>> you prefer.
>
> It's not a hyperbole it's fact and I even talked about it on conference.
> You should just compare all my cmake files with Makefile+.in+.m4 (and msvc
> folder) it was significant reduce code to maintain.
> Anyway all my intention in this field it's to reduce pain and reduce suppor
> time for build system.
> Curren state:
>
> cat `find ./ | grep '\.in\|\.m4\|Makefile\|\/msvc\/'` | wc
> 22942 76111 702163
>
> cat `find ./ | grep 'CMakeLists\|\.cmake'` | wc
> 9160 16604 278061

Given that you don't have feature parity this just seems like trolling.

> and also, I use code style when a source file names every time on new
> line... it's serious increase numbers of line.
> If compare the same style as in Makefile it will be ~3000 (you can just
> compare words ;) )

Right, because m4 uses so few lines.
2018-05-03 9:32 GMT+09:00 Andres Freund <andres@anarazel.de>:
On 2018-05-03 09:29:32 +0900, Yuriy Zhuravlev wrote:
> >
> > I don't think that unsubstantiated hyperbole is the right way to
> > approach the task of convincing the community to adopt the approach
> > you prefer.
>
>
> It's not a hyperbole it's fact and I even talked about it on conference.
> You should just compare all my cmake files with Makefile+.in+.m4 (and msvc
> folder) it was significant reduce code to maintain.
> Anyway all my intention in this field it's to reduce pain and reduce suppor
> time for build system.
> Curren state:
>
> cat `find ./ | grep '\.in\|\.m4\|Makefile\|\/msvc\/'` | wc
> 22942 76111 702163
>
> cat `find ./ | grep 'CMakeLists\|\.cmake'` | wc
> 9160 16604 278061
Given that you don't have feature parity this just seems like trolling.
I have. I have some gaps with .po generation and documentation, but all other features are the same! I can even run the TAP tests.
Look at my task/issue list https://github.com/stalkerg/postgres_cmake/issues - it can increase the number of lines by at most 10%.
On 2018-05-03 09:42:49 +0900, Yuriy Zhuravlev wrote:
> 2018-05-03 9:32 GMT+09:00 Andres Freund <andres@anarazel.de>:
>> Given that you don't have feature parity this just seems like trolling.
>
> I have. I have some lacks with .po generation and documentation but all!
> other features same, I even can run tap tests.
> Look into my task issue list
> https://github.com/stalkerg/postgres_cmake/issues it's can increase number
> of lines maximum on 10%.

You detect like a third of the things that the old configure detected. Most of the comments of the converted tests are missing. The thread safety checks definitely aren't comparable. The int128 type checks aren't comparable. No LLVM detection. The atomics checks don't guard against compilers that allow referencing undefined functions at compile time. That's like a 60s scan of what you have.

Sorry, but comparing lines at that state is just bullshit.

Greetings,

Andres Freund
> Sorry, but comparing lines at that state is just bullshit.

I totally disagree; the proportions will be the same in any case.
> Most of the comments of converted tests are missing.

Add 100-500 lines? OK.
> You detect like a third of the things that the old configure
> detected.

I tried to use the CMake way where it exists, but for some other things I ported the checks from the old autoconf system.
> The thread safety check definitely aren't comparable. The int128 type
> checks aren't comparable.
> The atomics check don't guard against compilers that allow to reference
> undefined functions at compile time.

I am not sure about "comparable", but in any case you can make a PR with a fix, or at least open an issue in my tracker, and I will fix it.
> No LLVM detection.

Sure! That's because my code base is still on Postgres 10. After all the words here about a new build system and cmake, I have no plan to support unreleased versions. I am not a masochist...
Regards,
Hello, Yuriy.

You wrote:

>> (2) it might make things easier on Windows, which could be a sufficiently
>> good reason but I don't think I've seen anyone explain exactly how much
>> easier it will make things and in what ways.

YZ> 1. You can remove the tools/msvc folder because all your build rules
YZ> will be universal. (the cmake build now has much fewer lines of code)
YZ> 2. You can forget about the terminal on Windows (for windows guys it's important)
YZ> 3. You can properly check the environment on Windows; right now we
YZ> have hardcoded headers and many options. The configure process will be
YZ> the same on all platforms.
YZ> 4. You can generate not only GNU Make or MSVC projects, you can also
YZ> make Xcode projects, Ninja, or NMake for building under MSVC Make.
YZ> For Windows, you can also easily change MSVC to Clang; it's not hardcoded at all.
YZ> 5. With CMake you have an easy way to build extra modules
YZ> (plugins); I have an already-working prototype of PGXS for windows. A
YZ> plugin should just include a .cmake file generated with the Postgres build.
YZ> Example:
YZ> https://github.com/stalkerg/postgres_cmake/blob/cmake/contrib/adminpack/CMakeLists.txt
YZ> If PGXS is True it means we build the module outside postgres.

Cool! Thanks for pointing this out. I just had problems building PG extensions for Windows. So I switched to MSYS2 and only then I managed it. No chance with MSVC :(

YZ> But in my opinion, you should just try CMake to figure out all the benefits.

>> we can't judge whether they do without a clear explanation of what the gains will be

YZ> I think it's not the kind of thing that is easy to explain. The main
YZ> benefits are not in the unix console area and the C language...

--
With best wishes,
Pavel                          mailto:pavel@gf.microolap.com
On Thu, May 3, 2018 at 12:32:39AM +0200, Catalin Iacob wrote:
> I do have a real concern about the long term attractiveness of the
> project to new developers, especially younger ones as time passes.
> It's not a secret that people will just avoid creaky old projects, and
> for Postgres old out of fashion things do add up: autoconf, raw make,
> Perl for tests, C89, old platform support. [...] I don't feel the balance
> between recruiting new developers and end user benefits tilted enough to
> replace the build system but maybe in some years that will be the case.

What percentage of our adoption decline from new developers is based on our build system, and how much of it is based on the fact that we use the C language?

--
Bruce Momjian  <bruce@momjian.us>        http://momjian.us
EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
On May 17, 2018 7:44:44 PM PDT, Bruce Momjian <bruce@momjian.us> wrote:
> What percentage of our adoption decline from new developers is based on
> our build system, and how much of it is based on the fact we use the C
> language?

I think neither is as strong a factor as our weird procedures and slow review. People are used to github pull requests, working bug trackers, etc.

I do think that using more modern C or a reasonable subset of C++ would make things easier. Don't think there's really an alternative there quite yet.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
On 18 May 2018 at 11:10, Andres Freund <andres@anarazel.de> wrote:
On May 17, 2018 7:44:44 PM PDT, Bruce Momjian <bruce@momjian.us> wrote:
>On Thu, May 3, 2018 at 12:32:39AM +0200, Catalin Iacob wrote:
>> I do have a real concern about the long term attractiveness of the
>> project to new developers, especially younger ones as time passes.
>> It's not a secret that people will just avoid creaky old projects,
>and
>> for Postgres old out of fashion things do add up: autoconf, raw make,
>> Perl for tests, C89, old platform support. I have no doubt that the
>> project is already loosing competent potential developers due to
>this.
>> One can say this is superficial and those developers should look at
>> the important things but that does not change reality that some will
>> just say pass because of dislike of the old technologies I mentioned.
>> Personally, I can say that if the project were still in CVS I would
>> probably not bother as I just don't have energy to learn an inferior
>> old version control system especially as I see version control as
>> fundamental to a developer. I don't feel the balance between
>> recruiting new developers and end user benefits tilted enough to
>> replace the build system but maybe in some years that will be the
>> case.
>
>What percentage of our adoption decline from new developers is based on
>our build system, and how much of it is based on the fact we use the C
>language?
I think neither is as strong a factor as our weird procedures and slow review. People are used to github pull requests, working bug trackers, etc. I do think that using more modern C or a reasonable subset of C++ would make things easier. Don't think there's really an alternative there quite yet.
+10.
Also - mailing lists. We're an ageing community and a lot of younger people just don't like or use mailing lists, let alone like to work *only* on mailing lists without forums, issue trackers, etc etc.
I happen to be pretty OK with the status quo, but it's definitely harder to get involved casually or as a new participant. OTOH, that helps cut down the noise level of crap suggestions and terrible patches a little bit, which matters when we have limited review bandwidth.
Then there's the Windows build setup - you can't just fire up Visual Studio and start learning the codebase.
We also have what seems like half an OS worth of tooling to support our shared-nothing-by-default multi-processing model. Custom spinlocks, our LWLocks, our latches, signal based IPC + ProcSignal signal multiplexing, extension shmem reservation and allocation, DSM, DSA, longjmp based exception handling and unwinding ... the learning curve for PostgreSQL programming is a whole lot more than just C even before you get into the DB-related bits. And there's not a great deal of help with the learning curve.
I keep wanting to write some blogs and docs on relevant parts, but you know how it is with time.
The only part that build system changes would help with would be getting Windows/VS and OSX/XCode users started a little more easily. Which wouldn't help tons when they looked at our code and went "WTF, where do I find out what any of this stuff even is?".
(Yes, I know there are some good READMEs already, but often you need to understand quite a bit of the system before you can understand the READMEs...)
2018-05-18 5:50 GMT+02:00 Craig Ringer <craig@2ndquadrant.com>:
+1
I have to maintain Orafce and plpgsql_check for MS Windows, and it is not pleasant work.
Pavel
Hi,

On 2018-05-18 11:50:47 +0800, Craig Ringer wrote:
> Also - mailing lists. We're an ageing community and a lot of younger people
> just don't like or use mailing lists, let alone like to work *only* on
> mailing lists without forums, issue trackers, etc etc.

Can't see getting rid of those entirely. None of the github style platforms copes with reasonably complex discussions.

> We also have what seems like half an OS worth of tooling to support our
> shared-nothing-by-default multi-processing model. Custom spinlocks, our
> LWLocks, our latches, signal based IPC + ProcSignal signal multiplexing,
> extension shmem reservation and allocation, DSM, DSA, longjmp based
> exception handling and unwinding ... the learning curve for PostgreSQL
> programming is a whole lot more than just C even before you get into the
> DB-related bits. And there's not a great deal of help with the learning
> curve.

A good chunk of that we'd probably have anyway. Even with threads we'd likely have our own spinlocks, lwlocks, latches, signal handling, explicit shared memory (for hugepages). I think having a decently performant DB will always imply a lot of "OS like" infrastructure.

I agree very much on the exception handling weirdness - proper language level exceptions are much easier to handle, and could offer ease of use (no volatiles!) and a lot more flexibility (say throwing errors which signal that no DB level activity happened).

I actually don't think the earlier category is as painful as our idiosyncrasies around C's weaknesses. Lists, Node based types, dynahash etc. are hard to avoid and failure prone.

Greetings,

Andres Freund
On Thu, May 17, 2018 at 10:42:00PM -0700, Andres Freund wrote:
> > We also have what seems like half an OS worth of tooling to support our
> > shared-nothing-by-default multi-processing model. Custom spinlocks, our
> > LWLocks, our latches, signal based IPC + ProcSignal signal multiplexing,
> > extension shmem reservation and allocation, DSM, DSA, longjmp based
> > exception handling and unwinding ... the learning curve for PostgreSQL
> > programming is a whole lot more than just C even before you get into the
> > DB-related bits. And there's not a great deal of help with the learning
> > curve.
>
> A good chunk of that we'd probably have anyway. Even with threads we'd
> likely have our own spinlocks, lwlocks, latches, signal handling,
> explicit shared memory (for hugepages). I think having a decently
> performant DB will always imply a lot of "OS like" infrastructure.

I think threading would definitely make server programming harder.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
I suppose I can make a summary after reading all this:
1. Any change in the development process will only be possible if it is convenient for each key developer personally. (The development process includes the build system.)
2. Currently, almost all key developers use Unix-like systems, they have strong old-school C experience, and the current build system is very comfortable for them.
I think a new build system will only become possible for one of the following reasons:
1. Autotools becomes completely deprecated and unsupported.
2. The key developers are replaced by people with different experience and habits (and maybe younger ones).
I don't want to be a CMake advocate here, and I see some problems with CMake for the Postgres project too. But I want to make Postgres development more comfortable for people like me who also don't like mailing lists and grew up with github. Unfortunately, we are too few here to change anything now.
Can't see getting rid of those entirely. None of the github style
platforms copes with reasonable complex discussions.
I disagree. A good example of complex discussions on github is the Rust language tracker for RFCs:
https://github.com/rust-lang/rfcs/issues
and one concrete example: https://github.com/rust-lang/rfcs/issues/2327
I have no problem with complex discussions on github.
Anyway, it's much better than tons of emails in your mailbox without tags and the status of the discussion.
On Monday, May 28, 2018 4:37:06 AM CEST Yuriy Zhuravlev wrote:
> > Can't see getting rid of those entirely. None of the github style
> > platforms copes with reasonably complex discussions.
>
> I disagree. A good example of complex discussions on github is the Rust
> language tracker for RFCs:
> https://github.com/rust-lang/rfcs/issues
> and one concrete example: https://github.com/rust-lang/rfcs/issues/2327
> I have no problem with complex discussions on github.

It is indeed hard to follow on github, and would be even worse with bigger threads. Email readers show threads in a hierarchical way; we can see who answered whom, and discussions can fork to completely different aspects of an issue without being mixed together.

> Anyway, it's much better than tons of emails in your mailbox without tags
> and the status of the discussion.

A github thread does not show what I read / what I have to read, does it now?
On Mon, 28 May 2018 at 16:42, Pierre Ducroquet <p.psql@pinaraf.info> wrote:
On Monday, May 28, 2018 4:37:06 AM CEST Yuriy Zhuravlev wrote:
> > Can't see getting rid of those entirely. None of the github style
> > platforms copes with reasonable complex discussions.
>
> I disagree. A good example of complex discussions on github is Rust
> language tracker for RFCs:
> https://github.com/rust-lang/rfcs/issues
> and one concrete example: https://github.com/rust-lang/rfcs/issues/2327
> I have no problem with complex discussions on github.
It is indeed hard to follow on github, and would be even worse with bigger
threads.
Email readers show threads in a hierarchical way; we can see who answered
whom, and discussions can fork to completely different aspects of an issue
without being mixed together.
Anyway, I don't have this feature in the GMail web interface. But yes, sometimes it's useful.
On github you can make a new issue and move some messages there; it should be done by a moderator.
> Anyway, it's much better than tons of emails in your mailbox without tags
> and status of discussion.
A github thread does not show what I read / what I have to read, does it now ?
On github you have notifications about new messages in subscribed issues, and if you follow the links from https://github.com/notifications those links disappear.
Also, don't forget about browser bookmarks and other plugins for that; the web is much more flexible than email.
On 28 May 2018 at 16:02, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
FWIW, I don't agree with your conclusions re the build system. It's more than just a bunch of conservative dinosaurs not wanting to change how they do anything, though you can frame it that way if you like. It's that a change needs to offer really compelling benefits, and I don't think enough people are convinced of those benefits.
On Mon, May 28, 2018, 10:03 Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
> Anyway I have no this feature on GMail web interface. But yes, sometimes
> it's useful.
It is correct that Gmail is incapable of this in the web browser. Many other email systems can though, and Gmail still speaks imap so you can use those if you prefer.
Which outlines a huge advantage of email as the communications medium. This allows each and every person to pick a tool and interface that suits them. Some prefer Gmail web, others mutt, others gnus. And they all work.
With something like github issues everybody is forced to use the same, more limited, interface, with no choice in the matter.
> Anyway, it's much better than tons of emails in your mailbox without tags
> and status of discussion.
> > A github thread does not show what I read / what I have to read, does it now?
>
> On github you have notifications about new messages in subscribed issues, and if you follow the links from https://github.com/notifications those links disappear.
This works similar to unread threads in most mail programs, including Gmail, doesn't it? And the subscribed issue functionality you can easily get in Gmail by using starred threads for example?
And the read/unread can be handled both on a thread basis and individual message basis depending on preference, but with issues it's only on thread level.
> Also, don't forget about browser bookmarks and other plugins for that; the web is much more flexible than email.
I would argue the exact opposite - mail is a lot more flexible than using github issues and that's one of the most important reasons I prefer it.
(and there are of course many ways to tag and categorize your email, many more so than with issues. Specifically bookmarks will of course depend on your mail program)
There are definitely advantages with issues for tracking, such as getting a more structured central repository of them (but that requires very strict rules for how to apply them and huge amounts of maintenance effort), but as a communications tool I'd say email is vastly superior, particularly thanks to the flexibility.
/Magnus
It's more than just a bunch of conservative dinosaurs not wanting to change how they do anything,
I didn't say that.
It's that a change needs to offer really compelling benefits
That's because the benefits depend on your development style and your habits.
For me, for example, the simple CMake syntax and the ability to generate project files for IDEs on many platforms are really compelling benefits.
If you love Perl-like syntax and work in Vim/Emacs under Linux, you will only get extra problems from modern build systems. (That's not bad, it's just how it works.)
It is correct that Gmail is incapable of this in the web browser. Many other email systems can though, and Gmail still speaks imap so you can use those if you prefer.
Mail programs outside the web browser are not popular anymore, and these standalone programs have become very slow to evolve (for example Thunderbird).
I really don't want to install anything if GMail covers 99% of my usage. We're back to personal habits again...
Also, github has an API, so you can make your own app with a custom interface.
(and there are of course many ways to tag and categorize your email, many more so than with issues. Specifically bookmarks will of course depend on your mail program)
But you have to do it all yourself, and that's too much work. On github a moderator can apply tags from a pre-prepared list; I think that's better.
but as a communications tool I'd say email is vastly superior, particularly thanks to the flexibility.
I agree, you're right, but I was talking about another kind of flexibility. Your kind of flexibility brings some problems of its own.
For example, it's complicated to do a good search over the archives. It's a tradeoff, and my impression is that the github-like way is better, but again that's a personal impression.
PS: on github you can edit your messages, and you get syntax highlighting for source code...
On Mon, May 28, 2018 at 11:16:08AM +0200, Magnus Hagander wrote:
> I would argue the exact opposite - mail is a lot more flexible than using
> github issues and that's one of the most important reasons I prefer it.
>
> (and there are of course many ways to tag and categorize your email, many
> more so than with issues. Specifically bookmarks will of course depend on
> your mail program)
>
> There are definitely advantages with issues for tracking, such as getting
> a more structured central repository of them (but that requires very strict
> rules for how to apply them and huge amounts of maintenance effort), but as
> a communications tool I'd say email is vastly superior, particularly thanks
> to the flexibility.

It might be that occasional users would find github easier, and more invested users would find email easier.
On Mon, May 28, 2018 at 4:23 PM, Bruce Momjian <bruce@momjian.us> wrote:
> It might be that occasional users would find github easier, and more
> invested users would find email easier.

How do people get to be invested developers, though? Everybody starts as a more occasional user. I started out with one smallish patch for 7.4 and never intended at that stage to do much more. (So much for prescience.)

The older I get the more I am prepared to admit that my preferred way to do things might not suit those younger than me. Craig is right, our processes do not make newcomers, especially younger newcomers, feel very comfortable. He's also right that the build system is among the least of our problems in making newcomers feel comfortable.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
First, I apologize if my words hurt someone. I didn't want this.
Second, I totally agree with Andrew.
He's also right that the build system is among the
least of our problems in making newcomers feel comfortable.
This is what I wanted to say. There is no big technical difference between build systems in terms of results: you can build executables and libraries for every needed platform using any of them.
The main difference is in comfort, and the degree of comfort depends on your development style, experience, and platform.
If you use Windows, or IDEs like XCode, or you don't know bash, m4, sed, grep, perl, Makefile... working with Postgres as a developer will be uncomfortable.
CMake can bring a similar experience to every platform. That's it.
(And maybe lower the cost of supporting your build system.)
PS: I know all these technologies and usually use Linux, but CMake or Meson still looks more comfortable to me.
On Mon, 28 May 2018 at 03:30, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
If we were starting out a new project, would we choose the tools and environments we have now? Probably not. Is it worth spending thousands of person-hours converting what we have into something different that happens to be de rigueur, and (especially) using up many hours of our precious core developer time while they learn the new methods, while not actually gaining functionality? Also, probably not.
The core developers are core developers because they have been involved with postgres for years. Yes, to a certain extent that's a respect thing, they've earned the right to be part of the core team, but it's also related to the fact that they're likely to be around moving forward.
Someone has to maintain and manage these things. With the greatest of respect - I'm sure you have the best of intentions and would be happy to put in many person-hours changing the build environment and helping everyone through the change process - life has a habit of overtaking our best intentions. Who's to know whether you'll still be involved in Postgres in 5 years' time?
Geoff
Is it worth spending thousands of person-hours converting what we have into something different that happens to be de rigueur, and (especially) using up many hours of our precious core developer time while they learn the new methods, while not actually gaining functionality? Also, probably not.
Unfortunately for me, I have already spent many hours replacing the build system with CMake, and it's working fine in most cases (even on POWER8 AIX and SPARC Solaris).
Already now you can build Postgres for Windows without a terminal at all. At the least, I want to find a good use for this work.
On Tue, 29 May 2018 at 11:42, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
There is a good use for it: you use it and are happy with it. You don't have to support any users for whom it doesn't work well, other users don't have to spend their time on it.
I appreciate that it's nice to have everyone use your code, but sometimes people don't want to. There are many projects I've spent weeks developing code for a feature that I thought was the business, only to have the maintainer and/or users say "no thanks": that's the spirit of open source. Arguing with people and telling them that they're wrong or (as you appear to be doing) that they're old and out of touch isn't going to make them any more likely to want to use your code.
I notice that you ignored the two other paragraphs of my email. I appreciate that you're finding this frustrating but selectively picking like that rarely helps a discussion progress beyond point-scoring.
Geoff
Arguing with people and telling them that they're wrong or (as you appear to be doing) that they're old and out of touch isn't going to make them any more likely to want to use your code.
You are totally wrong; I didn't do that, and I especially didn't call anybody "old".
Perhaps you think this because I wrote:
Key developers will be changed by people with another experience and habits (and maybe younger).
In this context, by "younger" I mean the different experience a new generation of developers is getting now. They don't know "news", "mailing lists", "BBS", "fidonet", "IRC", "perl", "C" and true terminals, but they know "github", "python", "C++", "Java", "gitter", "discord", "JS".
None of this is bad, it's just an obvious difference. Moreover, I think the "old" experience is better, but all this experience dictates development styles and tools.
There are many projects I've spent weeks developing code for a feature that I thought was the business, only to have the maintainer and/or users say "no thanks": that's the spirit of open source.
I agree, I know that, but I still think changing the build process is really needed for the Postgres community to grow. So far, I can't see a "no thanks".
I notice that you ignored the two other paragraphs of my email. I appreciate that you're finding this frustrating but selectively picking like that rarely helps a discussion progress beyond point-scoring.
Because I agree with these obvious things and don't understand how they are tied to our discussion. It looks like you understood my words in the wrong key. Sorry for that.
On Wed, 30 May 2018 at 00:51, Yuriy Zhuravlev <stalkerg@gmail.com> wrote:
You are totally wrong; I didn't do that, and I especially didn't call anybody "old".
Then I apologise for misunderstanding your intention. Language/culture barrier perhaps?
Geoff