Thread: C99 compliance for src/port/snprintf.c
I noticed that src/port/snprintf.c still follows the old (pre-C99) semantics for what to return from snprintf() after buffer overrun. This seems bad for a couple of reasons: * It results in potential performance loss in psnprintf and appendPQExpBufferVA, since those have to fly blind about how much to enlarge the buffer before retrying. * Given that we override the system-supplied *printf if it doesn't support the 'z' width modifier, it seems highly unlikely that we are still using any snprintf's with pre-C99 semantics, except when this code is used. So there's not much point in keeping this behavior as a test case to ensure compatibility with such libraries. * When we install snprintf.c, port.h forces it to be used everywhere that includes c.h, including extensions. It seems quite possible that there is extension code out there that assumes C99 snprintf behavior. Such code would work today everywhere except Windows, since that's the only platform made in this century that requires snprintf.c. Between the existing Windows porting hazard and our nearby discussion about using snprintf.c on more platforms, I'd say this is a gotcha waiting to bite somebody. Hence, PFA a patch that changes snprintf.c to follow C99 by counting dropped bytes in the result of snprintf(), including in the case where the passed count is zero. (I also dropped the tests for str == NULL when count isn't zero; that's not permitted by either C99 or SUSv2, so I see no reason for this code to support it. Also, avoid wasting one byte in the local buffer for *fprintf.) I'm almost tempted to think that the reasons above make this a back-patchable bug fix. Comments? regards, tom lane diff --git a/src/port/snprintf.c b/src/port/snprintf.c index a184134..851e2ae 100644 *** a/src/port/snprintf.c --- b/src/port/snprintf.c *************** *** 2,7 **** --- 2,8 ---- * Copyright (c) 1983, 1995, 1996 Eric P. Allman * Copyright (c) 1988, 1993 * The Regents of the University of California. All rights reserved. + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions *************** *** 49,56 **** * SNPRINTF, VSNPRINTF and friends * * These versions have been grabbed off the net. They have been ! * cleaned up to compile properly and support for most of the Single Unix ! * Specification has been added. Remaining unimplemented features are: * * 1. No locale support: the radix character is always '.' and the ' * (single quote) format flag is ignored. --- 50,57 ---- * SNPRINTF, VSNPRINTF and friends * * These versions have been grabbed off the net. They have been ! * cleaned up to compile properly and support for most of the C99 ! * specification has been added. Remaining unimplemented features are: * * 1. No locale support: the radix character is always '.' and the ' * (single quote) format flag is ignored. *************** *** 64,88 **** * 5. Space and '#' flags are not implemented. * * ! * The result values of these functions are not the same across different ! * platforms. This implementation is compatible with the Single Unix Spec: * ! * 1. -1 is returned only if processing is abandoned due to an invalid ! * parameter, such as incorrect format string. (Although not required by ! * the spec, this happens only when no characters have yet been transmitted ! * to the destination.) * ! * 2. For snprintf and sprintf, 0 is returned if str == NULL or count == 0; ! * no data has been stored. 
* ! * 3. Otherwise, the number of bytes actually transmitted to the destination ! * is returned (excluding the trailing '\0' for snprintf and sprintf). * ! * For snprintf with nonzero count, the result cannot be more than count-1 ! * (a trailing '\0' is always stored); it is not possible to distinguish ! * buffer overrun from exact fit. This is unlike some implementations that ! * return the number of bytes that would have been needed for the complete ! * result string. */ /************************************************************** --- 65,88 ---- * 5. Space and '#' flags are not implemented. * * ! * Historically the result values of sprintf/snprintf varied across platforms. ! * This implementation now follows the C99 standard: * ! * 1. -1 is returned if an error is detected in the format string, or if ! * a write to the target stream fails (as reported by fwrite). Note that ! * overrunning snprintf's target buffer is *not* an error. * ! * 2. For successful writes to streams, the actual number of bytes written ! * to the stream is returned. * ! * 3. For successful sprintf/snprintf, the number of bytes that would have ! * been written to an infinite-size buffer (excluding the trailing '\0') ! * is returned. snprintf will truncate its output to fit in the buffer ! * (ensuring a trailing '\0' unless count == 0), but this is not reflected ! * in the function result. * ! * snprintf buffer overrun can be detected by checking for function result ! * greater than or equal to the supplied count. */ /************************************************************** *************** *** 101,115 **** #undef fprintf #undef printf ! /* Info about where the formatted output is going */ typedef struct { char *bufptr; /* next buffer output position */ char *bufstart; /* first buffer element */ ! char *bufend; /* last buffer element, or NULL */ /* bufend == NULL is for sprintf, where we assume buf is big enough */ FILE *stream; /* eventual output destination, or NULL */ ! int nchars; /* # chars already sent to stream */ bool failed; /* call is a failure; errno is set */ } PrintfTarget; --- 101,127 ---- #undef fprintf #undef printf ! /* ! * Info about where the formatted output is going. ! * ! * dopr and subroutines will not write at/past bufend, but snprintf ! * reserves one byte, ensuring it may place the trailing '\0' there. ! * ! * In snprintf, we use nchars to count the number of bytes dropped on the ! * floor due to buffer overrun. The correct result of snprintf is thus ! * (bufptr - bufstart) + nchars. (This isn't as inconsistent as it might ! * seem: nchars is the number of emitted bytes that are not in the buffer now, ! * either because we sent them to the stream or because we couldn't fit them ! * into the buffer to begin with.) ! */ typedef struct { char *bufptr; /* next buffer output position */ char *bufstart; /* first buffer element */ ! char *bufend; /* last+1 buffer element, or NULL */ /* bufend == NULL is for sprintf, where we assume buf is big enough */ FILE *stream; /* eventual output destination, or NULL */ ! int nchars; /* # chars sent to stream, or dropped */ bool failed; /* call is a failure; errno is set */ } PrintfTarget; *************** int *** 147,163 **** pg_vsnprintf(char *str, size_t count, const char *fmt, va_list args) { PrintfTarget target; ! if (str == NULL || count == 0) ! return 0; target.bufstart = target.bufptr = str; target.bufend = str + count - 1; target.stream = NULL; ! 
/* target.nchars is unused in this case */ target.failed = false; dopr(&target, fmt, args); *(target.bufptr) = '\0'; ! return target.failed ? -1 : (target.bufptr - target.bufstart); } int --- 159,186 ---- pg_vsnprintf(char *str, size_t count, const char *fmt, va_list args) { PrintfTarget target; + char onebyte[1]; ! /* ! * C99 allows the case str == NULL when count == 0. Rather than ! * special-casing this situation further down, we substitute a one-byte ! * local buffer. Callers cannot tell, since the function result doesn't ! * depend on count. ! */ ! if (count == 0) ! { ! str = onebyte; ! count = 1; ! } target.bufstart = target.bufptr = str; target.bufend = str + count - 1; target.stream = NULL; ! target.nchars = 0; target.failed = false; dopr(&target, fmt, args); *(target.bufptr) = '\0'; ! return target.failed ? -1 : (target.bufptr - target.bufstart ! + target.nchars); } int *************** pg_vsprintf(char *str, const char *fmt, *** 177,192 **** { PrintfTarget target; - if (str == NULL) - return 0; target.bufstart = target.bufptr = str; target.bufend = NULL; target.stream = NULL; ! /* target.nchars is unused in this case */ target.failed = false; dopr(&target, fmt, args); *(target.bufptr) = '\0'; ! return target.failed ? -1 : (target.bufptr - target.bufstart); } int --- 200,214 ---- { PrintfTarget target; target.bufstart = target.bufptr = str; target.bufend = NULL; target.stream = NULL; ! target.nchars = 0; /* not really used in this case */ target.failed = false; dopr(&target, fmt, args); *(target.bufptr) = '\0'; ! return target.failed ? -1 : (target.bufptr - target.bufstart ! + target.nchars); } int *************** pg_vfprintf(FILE *stream, const char *fm *** 213,219 **** return -1; } target.bufstart = target.bufptr = buffer; ! target.bufend = buffer + sizeof(buffer) - 1; target.stream = stream; target.nchars = 0; target.failed = false; --- 235,241 ---- return -1; } target.bufstart = target.bufptr = buffer; ! target.bufend = buffer + sizeof(buffer); /* use the whole buffer */ target.stream = stream; target.nchars = 0; target.failed = false; *************** flushbuffer(PrintfTarget *target) *** 256,261 **** --- 278,287 ---- { size_t nc = target->bufptr - target->bufstart; + /* + * Don't write anything if we already failed; this is to ensure we + * preserve the original failure's errno. + */ if (!target->failed && nc > 0) { size_t written; *************** dostr(const char *str, int slen, PrintfT *** 1035,1041 **** { /* buffer full, can we dump to stream? */ if (target->stream == NULL) ! return; /* no, lose the data */ flushbuffer(target); continue; } --- 1061,1070 ---- { /* buffer full, can we dump to stream? */ if (target->stream == NULL) ! { ! target->nchars += slen; /* no, lose the data */ ! return; ! } flushbuffer(target); continue; } *************** dopr_outch(int c, PrintfTarget *target) *** 1054,1060 **** { /* buffer full, can we dump to stream? */ if (target->stream == NULL) ! return; /* no, lose the data */ flushbuffer(target); } *(target->bufptr++) = c; --- 1083,1092 ---- { /* buffer full, can we dump to stream? */ if (target->stream == NULL) ! { ! target->nchars++; /* no, lose the data */ ! return; ! } flushbuffer(target); } *(target->bufptr++) = c;
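(For readers skimming the patch: the behavioral change is entirely about the function result after truncation. Below is a minimal standalone sketch of the C99 convention the patch adopts; the names and string are illustrative only, not taken from the patch.)

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	char		small[8];
	int			needed;

	/*
	 * Under C99 this returns 20, even though only 7 characters plus the
	 * trailing '\0' fit in small[].  The pre-C99 snprintf.c being replaced
	 * would have returned 7 here, making the overrun undetectable.
	 */
	needed = snprintf(small, sizeof(small), "12345678901234567890");

	if (needed >= (int) sizeof(small))
	{
		/* truncated: allocate exactly what the full result needs */
		char	   *big = malloc(needed + 1);

		if (big == NULL)
			return 1;
		snprintf(big, needed + 1, "12345678901234567890");
		printf("needed %d bytes, full result \"%s\"\n", needed, big);
		free(big);
	}
	return 0;
}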
On Tue, Aug 14, 2018 at 7:28 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I'm almost tempted to think that the reasons above make this a back-patchable bug fix. Comments?

No objections to changing the behavior. Have you checked whether there are any noticeable performance consequences?

Back-patching seems a bit aggressive to me considering that the danger is hypothetical. I'd want to have some tangible evidence that back-patching was going to help somebody. For all we know somebody's got an extension which they only use on Windows that happens to be relying on the current behavior, although more likely still (IMHO) is that there is little or no code relying on either behavior.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> On Tue, Aug 14, 2018 at 7:28 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I'm almost tempted to think that the reasons above make this a back-patchable bug fix. Comments?

> No objections to changing the behavior. Have you checked whether there are any noticeable performance consequences?

Hard to believe there could be any performance loss. The normal (non buffer overflowing) code path gets two additional instructions, storing zero into .nchars and then adding it to the result. There would be more work added to buffer-overflow cases, but we'd surely more than win that back by avoiding repeated repalloc-and-try-again cycles.

> Back-patching seems a bit aggressive to me considering that the danger is hypothetical. I'd want to have some tangible evidence that back-patching was going help somebody. For all we know somebody's got an extension which they only use on Windows that happens to be relying on the current behavior, although more likely still (IMHO) is that it there is little or no code relying on either behavior.

Yeah, it's theoretically possible that there's somebody out there who's depending on the pre-C99 behavior and never tested their code anywhere but Windows. But come on, how likely is that really, compared to the much more plausible risks I already cited? It's not even that easy to imagine how someone would construct code that had such a dependency while not being outright buggy in the face of buffer overrun.

BTW, independently of whether to back-patch, it strikes me that what we ought to do in HEAD (after applying this) is to just assume we have C99-compliant behavior, and rip out the baroque logic in psnprintf and appendPQExpBufferVA that tries to deal with pre-C99 snprintf. I don't expect that'd save any really meaningful number of cycles, but at least it'd buy back the two added instructions mentioned above. I suppose we could put in a configure check that verifies whether the system snprintf returns the right value for overrun cases, though it's hard to believe there are any platforms that pass the 'z' check and would fail this one.

			regards, tom lane
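(A rough sketch of the simplification being described here -- this assumes a C99-compliant vsnprintf underneath and is not the actual psnprintf/appendPQExpBufferVA code. Once the return value is trusted, the retry loop needs at most one extra pass.)

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* hypothetical helper, loosely modeled on psprintf's contract */
static char *
format_alloc(const char *fmt, ...)
{
	size_t		len = 128;		/* initial guess */
	char	   *buf = malloc(len);

	while (buf != NULL)
	{
		va_list		args;
		int			needed;
		char	   *newbuf;

		va_start(args, fmt);
		needed = vsnprintf(buf, len, fmt, args);
		va_end(args);

		if (needed < 0)
			break;				/* bad format string; fail */
		if ((size_t) needed < len)
			return buf;			/* it fit, including the trailing '\0' */

		/* Trust the C99 result: one realloc, then the next pass must fit */
		len = (size_t) needed + 1;
		newbuf = realloc(buf, len);
		if (newbuf == NULL)
			break;
		buf = newbuf;
	}
	free(buf);
	return NULL;
}

int
main(void)
{
	char	   *s = format_alloc("a rather long path: %s/%s", "/usr/local/pgsql", "data");

	if (s != NULL)
	{
		puts(s);
		free(s);
	}
	return 0;
}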
Hi,

On 2018-08-15 11:41:46 -0400, Tom Lane wrote:
> BTW, independently of whether to back-patch, it strikes me that what we ought to do in HEAD (after applying this) is to just assume we have C99-compliant behavior, and rip out the baroque logic in psnprintf and appendPQExpBufferVA that tries to deal with pre-C99 snprintf. I don't expect that'd save any really meaningful number of cycles, but at least it'd buy back the two added instructions mentioned above. I suppose we could put in a configure check that verifies whether the system snprintf returns the right value for overrun cases, though it's hard to believe there are any platforms that pass the 'z' check and would fail this one.

We could just mandate C99, more generally.

/me goes and hides in a bush.
On 2018-Aug-15, Robert Haas wrote:
> On Tue, Aug 14, 2018 at 7:28 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > I'm almost tempted to think that the reasons above make this a back-patchable bug fix. Comments?
>
> No objections to changing the behavior. Have you checked whether there are any noticeable performance consequences?
>
> Back-patching seems a bit aggressive to me considering that the danger is hypothetical.

That was my first thought too, and my preferred path would be to make this master-only and only consider a backpatch later if we find some practical reason to do so.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Aug 15, 2018 at 11:52 AM, Andres Freund <andres@anarazel.de> wrote:
> We could just mandate C99, more generally.
>
> /me goes and hides in a bush.

It's hard to believe that would cost much.

Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability. But it's kind of difficult to believe that we really need to worry about people still running 20-year old compilers very much. My first exposure to Tom arguing that we ought to continue supporting C89 was about a decade ago, but the argument that people might still have older systems in service was a lot more plausible then than it is now.

BTW, I think a bush is probably not a nearly sufficient place to hide. The wrath of Tom will find you wherever you may go... :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Greetings,

* Andres Freund (andres@anarazel.de) wrote:
> On 2018-08-15 11:41:46 -0400, Tom Lane wrote:
> > BTW, independently of whether to back-patch, it strikes me that what we ought to do in HEAD (after applying this) is to just assume we have C99-compliant behavior, and rip out the baroque logic in psnprintf and appendPQExpBufferVA that tries to deal with pre-C99 snprintf. I don't expect that'd save any really meaningful number of cycles, but at least it'd buy back the two added instructions mentioned above. I suppose we could put in a configure check that verifies whether the system snprintf returns the right value for overrun cases, though it's hard to believe there are any platforms that pass the 'z' check and would fail this one.
>
> We could just mandate C99, more generally.

*cough* +1 *cough*

> /me goes and hides in a bush.

/me runs

Thanks!

Stephen
Hi,

On 2018-08-15 12:01:28 -0400, Robert Haas wrote:
> On Wed, Aug 15, 2018 at 11:52 AM, Andres Freund <andres@anarazel.de> wrote:
> > We could just mandate C99, more generally.
> >
> > /me goes and hides in a bush.
>
> It's hard to believe that would cost much.

Yea.

> Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability.

I don't really care much about either. The calculus for intermingled declarations would personally change the minute that we decided to allow some C++ - allowing for scoped locks etc via RAII - but not earlier.

The thing I'd really want is designated initializers for structs. Makes code for statically allocated structs *so* much more readable. And guess who's working on code that adds statically allocated structs with lots of members...

> BTW, I think a bush is probably not a nearly sufficient place to hide. The wrath of Tom will find you wherever you may go... :-)

That's why I keep moving. At 200 mph on a train :P. A continent away.

Greetings,

Andres Freund
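(A small illustration of the style being referred to, using a made-up struct rather than anything from the tree: designated initializers name each field where it is set, and omitted fields are implicitly zeroed, so a table of statically allocated entries stays readable as members are added.)

#include <stdbool.h>
#include <stdio.h>

typedef struct config_entry
{
	const char *name;
	int			default_value;
	int			min_value;
	int			max_value;
	bool		restart_required;
} config_entry;

/* hypothetical table; field order in the source need not match the struct */
static const config_entry sample_entries[] = {
	{
		.name = "work_mem",
		.default_value = 4096,
		.min_value = 64,
		.max_value = 2147483647,
	},
	{
		.name = "listen_port",
		.default_value = 5432,
		.restart_required = true,	/* all other fields default to zero */
	},
};

int
main(void)
{
	printf("%s defaults to %d\n", sample_entries[0].name, sample_entries[0].default_value);
	return 0;
}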
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Aug 15, 2018 at 11:52 AM, Andres Freund <andres@anarazel.de> wrote:
>> We could just mandate C99, more generally.
>>
>> /me goes and hides in a bush.

> It's hard to believe that would cost much.

I think we have done that, piece by piece, where it was actually buying us something. In particular we've gradually moved the goalposts for *printf compliance, and I'm proposing here to move them a bit further. I'm not sure what "we're going to insist on C99" even means concretely, given this position ...

> Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability.

... which I agree with.

> But it's kind of difficult to believe that we really need to worry about people still running 20-year old compilers very much.

Sure. It's been a long time since anybody worried about those as optimization targets, for instance. Still, I'm not in favor of actively breaking compatibility unless it buys us something.

			regards, tom lane
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> On 2018-Aug-15, Robert Haas wrote:
>> Back-patching seems a bit aggressive to me considering that the danger is hypothetical.

> That was my first thought too, and my preferred path would be to make this master-only and only consider a backpatch later if we find some practical reason to do so.

Meh --- the hazards of back-patching seem to me to be more hypothetical than the benefits. Still, I seem to be in the minority, so I withdraw the proposal to back-patch.

			regards, tom lane
On 8/15/18 12:17 PM, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>
>> Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability.
>
> ... which I agree with.

We already have -Wdeclaration-after-statement to prevent mixed declarations. Not sure what to do about comments except manual enforcement.

>> But it's kind of difficult to believe that we really need to worry about people still running 20-year old compilers very much.
>
> Sure. It's been a long time since anybody worried about those as optimization targets, for instance. Still, I'm not in favor of actively breaking compatibility unless it buys us something.

We use C99 for the pgBackRest project and we've found designated initializers, compound literals, and especially variadic macros to be very helpful. Only the latter really provides new functionality, but simplifying and clarifying code is always a bonus.

So, +1 from me!

Regards,
--
-David
david@pgmasters.net
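(For reference, the C99 variadic macros mentioned above look like the following; the macro and message are illustrative, not pgBackRest's actual API.)

#include <stdio.h>

/* hypothetical logging wrapper; __VA_ARGS__ is the C99 feature at issue */
#define LOG_ERROR(fmt, ...) \
	fprintf(stderr, "ERROR: %s:%d: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)

int
main(void)
{
	LOG_ERROR("unable to open %s (errno %d)", "/var/lib/pgbackrest", 2);
	return 0;
}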
I wrote:
> Meh --- the hazards of back-patching seem to me to be more hypothetical than the benefits. Still, I seem to be in the minority, so I withdraw the proposal to back-patch.

Actually, after digging around a bit, I'm excited about this again. There are only a couple dozen places in our tree that pay any attention to the result of (v)snprintf, but with the exception of psnprintf, appendPQExpBufferVA, and one or two other places, *they're all assuming C99 semantics*, and will fail to detect buffer overflow with the pre-C99 behavior.

Probably a lot of these are not live bugs because buffer overrun is not ever going to occur in practice. But at least pg_upgrade and pg_regress are constructing command strings including externally supplied paths, so overrun doesn't seem impossible. If it happened, they'd merrily proceed to execute a truncated command.

If we don't backpatch the snprintf change, we're morally obliged to back-patch some other fix for these places. At least one of them, in plperl's ppport.h, is not our code and so changing it seems like a bad idea.

Still want to argue for no backpatch?

			regards, tom lane

PS: I also found a couple of places that are just wrong regardless of semantics: they're checking overflow by "result > bufsize", not "result >= bufsize". Will fix those in any case.
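(The off-by-one mentioned in the PS, shown on a hypothetical call site rather than the actual ones: with C99 semantics a result equal to the buffer size already means the output was truncated, so the comparison has to be ">=", not ">".)

#include <stdio.h>

static int
build_command(char *cmd, size_t cmdsize, const char *path)
{
	int			len = snprintf(cmd, cmdsize, "pg_ctl -D \"%s\" start", path);

	/* WRONG: len == cmdsize is also a truncated result, but passes this test */
	/* if (len < 0 || len > (int) cmdsize) return -1; */

	/* RIGHT: anything that didn't fit, including the exact-size case, fails */
	if (len < 0 || len >= (int) cmdsize)
		return -1;
	return len;
}

int
main(void)
{
	char		cmd[32];

	if (build_command(cmd, sizeof(cmd), "/a/rather/long/externally/supplied/path") < 0)
		fprintf(stderr, "command would have been truncated\n");
	else
		puts(cmd);
	return 0;
}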
David Steele <david@pgmasters.net> writes:
> On 8/15/18 12:17 PM, Tom Lane wrote:
>> Robert Haas <robertmhaas@gmail.com> writes:
>>> Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability.
>> ... which I agree with.

> We already have -Wdeclaration-after-statement to prevent mixed declarations. Not sure what to do about comments except manual enforcement.

pgindent will change // comments to /* style, so at least on the timescale of a release cycle, we have enforcement for that.

			regards, tom lane
On 08/15/2018 12:17 PM, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>
>> Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability.
> ... which I agree with.

A decade or so ago I would have strongly agreed with you. But the language trend seems to be in the other direction. And there is something to be said for declaration near use without having to use an inner block. I'm not advocating that we change policy, however.

cheers

andrew

--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
> On 08/15/2018 12:17 PM, Tom Lane wrote:
>> Robert Haas <robertmhaas@gmail.com> writes:
>>> Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability.
>> ... which I agree with.

> A decade or so ago I would have strongly agreed with you. But the language trend seems to be in the other direction. And there is something to be said for declaration near use without having to use an inner block. I'm not advocating that we change policy, however.

FWIW, the issue I've got with what C99 did is that you can narrow the *start* of the scope of a local variable easily, but not the *end* of its scope, which seems to me to be solving at most half of the problem. To solve the whole problem, you end up needing a nested block anyway.

I do dearly miss the ability to easily limit the scope of a loop's control variable to just the loop, eg

	for (int i = 0; ...) { ... }

But AFAIK that's C++ not C99.

			regards, tom lane
On 08/15/2018 03:18 PM, Tom Lane wrote:
>
> FWIW, the issue I've got with what C99 did is that you can narrow the *start* of the scope of a local variable easily, but not the *end* of its scope, which seems to me to be solving at most half of the problem. To solve the whole problem, you end up needing a nested block anyway.
>
> I do dearly miss the ability to easily limit the scope of a loop's control variable to just the loop, eg
>
> for (int i = 0; ...) { ... }
>
> But AFAIK that's C++ not C99.

Agree completely.

cheers

andrew

--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 8/15/18 3:18 PM, Tom Lane wrote:
> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
>> On 08/15/2018 12:17 PM, Tom Lane wrote:
>>> Robert Haas <robertmhaas@gmail.com> writes:
>>>> Personally, I'd prefer to continue avoiding // comments and intermingled declarations of variables and code on grounds of style and readability.
>
>>> ... which I agree with.
>
>> A decade or so ago I would have strongly agreed with you. But the language trend seems to be in the other direction. And there is something to be said for declaration near use without having to use an inner block. I'm not advocating that we change policy, however.
>
> FWIW, the issue I've got with what C99 did is that you can narrow the *start* of the scope of a local variable easily, but not the *end* of its scope, which seems to me to be solving at most half of the problem. To solve the whole problem, you end up needing a nested block anyway.
>
> I do dearly miss the ability to easily limit the scope of a loop's control variable to just the loop, eg
>
> for (int i = 0; ...) { ... }
>
> But AFAIK that's C++ not C99.

This works in C99 -- and I'm a really big fan.

--
-David
david@pgmasters.net
David Steele <david@pgmasters.net> writes:
> On 8/15/18 3:18 PM, Tom Lane wrote:
>> I do dearly miss the ability to easily limit the scope of a loop's control variable to just the loop, eg
>> for (int i = 0; ...) { ... }
>> But AFAIK that's C++ not C99.

> This works in C99 -- and I'm a really big fan.

It does?  [ checks standard... ]  Oh wow:

	6.8.5 Iteration statements

	Syntax

	iteration-statement:
		while ( expression ) statement
		do statement while ( expression ) ;
		for ( expr-opt ; expr-opt ; expr-opt ) statement
		for ( declaration ; expr-opt ; expr-opt ) statement

I'd always thought this was only in C++. This alone might be a sufficient reason to drop C89 compiler support ...

			regards, tom lane
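(The construct in question, as a tiny standalone example: the loop variable's scope is limited to the loop itself. It compiles with gcc -std=c99 or any later standard, but not under -ansi/C89.)

#include <stdio.h>

int
main(void)
{
	int			total = 0;

	for (int i = 1; i <= 10; i++)	/* i exists only within this loop */
		total += i;

	/* i is out of scope here, so the name can be reused freely */
	for (int i = 10; i >= 1; i--)
		total -= i;

	printf("total = %d\n", total);
	return 0;
}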
Hi,

On 2018-08-15 15:57:43 -0400, Tom Lane wrote:
> I'd always thought this was only in C++. This alone might be a sufficient reason to drop C89 compiler support ...

It's also IIRC reasonably widely supported from before C99. So, for the sake of designated initializers, for loop scoping, snprintf, let's do this in master?

Greetings,

Andres Freund
On 2018-08-15 14:05:29 -0400, Tom Lane wrote:
> I wrote:
> > Meh --- the hazards of back-patching seem to me to be more hypothetical than the benefits. Still, I seem to be in the minority, so I withdraw the proposal to back-patch.
>
> Actually, after digging around a bit, I'm excited about this again. There are only a couple dozen places in our tree that pay any attention to the result of (v)snprintf, but with the exception of psnprintf, appendPQExpBufferVA, and one or two other places, *they're all assuming C99 semantics*, and will fail to detect buffer overflow with the pre-C99 behavior.
>
> Probably a lot of these are not live bugs because buffer overrun is not ever going to occur in practice. But at least pg_upgrade and pg_regress are constructing command strings including externally supplied paths, so overrun doesn't seem impossible. If it happened, they'd merrily proceed to execute a truncated command.
>
> If we don't backpatch the snprintf change, we're morally obliged to back-patch some other fix for these places. At least one of them, in plperl's pport.h, is not our code and so changing it seems like a bad idea.
>
> Still want to argue for no backpatch?
>
> regards, tom lane
>
> PS: I also found a couple of places that are just wrong regardless of semantics: they're checking overflow by "result > bufsize", not "result >= bufsize". Will fix those in any case.

I'm a bit confused. Why did you just backpatch this ~two hours after people objected to the idea? Even if it were during my current work hours, I don't even read mail that often if I'm hacking on something complicated.

Greetings,

Andres Freund
Andres Freund <andres@anarazel.de> writes:
> On 2018-08-15 15:57:43 -0400, Tom Lane wrote:
>> I'd always thought this was only in C++. This alone might be a sufficient reason to drop C89 compiler support ...

> It's also IIRC reasonably widely supported from before C99. So, for the sake of designated initializers, for loop scoping, snprintf, let's do this in master?

Nitpick: snprintf is an independent concern: that's from the C library, not from the compiler. To drive the point home, I could still test master on "gaur" if I were to install a just-slightly-newer gcc on that machine (its existing gcc installation isn't native either...); but replacing its libc is a nonstarter.

Experimenting here says that even reasonably modern gcc's won't take declarations-inside-for without "--std=c99" or such. No idea about other compilers. So we'd have a little bit of work to do on configuration before we could open the floodgates on this.

			regards, tom lane
Andres Freund <andres@anarazel.de> writes:
> On 2018-08-15 14:05:29 -0400, Tom Lane wrote:
>> Still want to argue for no backpatch?

> I'm a bit confused. Why did you just backpatch this ~two hours after people objected to the idea? Even if it were during my current work hours, I don't even read mail that often if I'm hacking on something complicated.

If a consensus emerges to deal with this some other way, reverting isn't hard. But I think it's pretty clear at this point that we're dealing with real bugs versus entirely hypothetical bugs, and that's not a decision that's hard to make IMO.

			regards, tom lane
Hi,

On 2018-08-15 18:13:59 -0400, Tom Lane wrote:
> Experimenting here says that even reasonably modern gcc's won't take declarations-inside-for without "--std=c99" or such. No idea about other compilers. So we'd have a little bit of work to do on configuration before we could open the floodgates on this.

I think autoconf's magic knows about most of that:

	— Macro: AC_PROG_CC_C99

	If the C compiler is not in C99 mode by default, try to add an option to output variable CC to make it so. This macro tries various options that select C99 on some system or another. It considers the compiler to be in C99 mode if it handles _Bool, flexible arrays, inline, long long int, mixed code and declarations, named initialization of structs, restrict, varargs macros, variable declarations in for loops and variable length arrays.

	After calling this macro you can check whether the C compiler has been set to accept C99; if not, the shell variable ac_cv_prog_cc_c99 is set to ‘no’.

I think we could get a start by adding that test to configure, without relying on it for now (i.e. keeping mylodon with -Wc99-extensions -Werror=c99-extensions alive). That'd tell us about which machines, besides presumably gaur, we'd need to either kick to the curb or change.

Greetings,

Andres Freund
Andres Freund <andres@anarazel.de> writes:
> On 2018-08-15 18:13:59 -0400, Tom Lane wrote:
>> Experimenting here says that even reasonably modern gcc's won't take declarations-inside-for without "--std=c99" or such.

> I think autoconf's magic knows about most of that:
> — Macro: AC_PROG_CC_C99

Ah, of course. What about the MSVC build?

> I think we could get a start by adding that test to configure, without relying on it for now (i.e. keeping mylodon with -Wc99-extensions -Werror=c99-extensions alive). That'd tell us about which machines, besides presumably gaur, we'd need to either kick to the curb or change.

Sure, no objection to putting that in just to see how much of the buildfarm can handle it. If the answer turns out to be "a lot", we might have to reconsider, but gathering data seems like the first thing to do.

			regards, tom lane
Hi,

On 2018-08-15 18:31:10 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2018-08-15 18:13:59 -0400, Tom Lane wrote:
> >> Experimenting here says that even reasonably modern gcc's won't take declarations-inside-for without "--std=c99" or such.
>
> > I think autoconf's magic knows about most of that:
> > — Macro: AC_PROG_CC_C99
>
> Ah, of course. What about the MSVC build?

It looks like it mostly just enables that by default. But I only looked cursorily. It's a bit annoying because that makes it harder to be sure which animals support what. Looks like e.g. hammerkop (supposedly msvc 2005) might not support the subset we want; not that I'd lose sleep over raising the minimum msvc in master a bit.

> > I think we could get a start by adding that test to configure, without relying on it for now (i.e. keeping mylodon with -Wc99-extensions -Werror=c99-extensions alive). That'd tell us about which machines, besides presumably gaur, we'd need to either kick to the curb or change.
>
> Sure, no objection to putting that in just to see how much of the buildfarm can handle it. If the answer turns out to be "a lot", we might have to reconsider, but gathering data seems like the first thing to do.

Cool. Too late today (in Europe for a few more days), but I'll try to come up with something tomorrow.

Greetings,

Andres Freund
On Thu, Aug 16, 2018 at 10:40 AM, Andres Freund <andres@anarazel.de> wrote:
> Hi,
>
> On 2018-08-15 18:31:10 -0400, Tom Lane wrote:
>> Andres Freund <andres@anarazel.de> writes:
>> > On 2018-08-15 18:13:59 -0400, Tom Lane wrote:
>> >> Experimenting here says that even reasonably modern gcc's won't take declarations-inside-for without "--std=c99" or such.
>>
>> > I think autoconf's magic knows about most of that:
>> > — Macro: AC_PROG_CC_C99
>>
>> Ah, of course. What about the MSVC build?
>
> It looks like it mostly just enables that by default. But I only looked cursorily. It's a bit annoying because that makes it harder to be sure which animals support what. Looks like e.g. hammerkop (supposedly msvc 2005) might not support the subset we want; not that I'd loose sleep over raising the minimum msvc in master a bit.

Really? I am not an MSVC user but I had the impression that their C mode (/TC or files named .c) was stuck on C89/C90 as a matter of policy, as Herb Sutter explained here (though maybe the situation has changed since then):

https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-and-c99/

That's presumably why cfbot's appveyor build always complains about people declaring variables after statements. To allow that particular language feature, it looks like you have to tell it that your .c file is really a C++ program with /TP. But that opens a separate can of worms, doesn't it?

--
Thomas Munro
http://www.enterprisedb.com
Thomas Munro <thomas.munro@enterprisedb.com> writes:
> Really? I am not an MSVC user but I had the impression that their C mode (/TC or files named .c) was stuck on C89/C90 as a matter of policy, as Herb Sutter explained here (though maybe the situation has changed since then):
> https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-and-c99/

Hm, I read that and what it says is that they don't plan to support C99 features that aren't also in C++. But this one surely is. How you turn it on (without enabling all of C++) is not very clear though.

			regards, tom lane
Hi,

On 2018-08-16 10:54:01 +1200, Thomas Munro wrote:
> Really? I am not an MSVC user but I had the impression that their C mode (/TC or files named .c) was stuck on C89/C90 as a matter of policy, as Herb Sutter explained here (though maybe the situation has changed since then):

They revised their position gradually, starting soon after. They claim full C99 "language" (vs library, which is also pretty complete) compliance now. I've not yet bothered to fully figure out which version supports what however. Nor am I really sure about the whole flag thing, it appears there's a gui element to choose, which we might need to mirror on the xml level.

A bit of googling shows
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/hh409293(v=vs.120)

"
Supports these ISO C99 language features:
_Bool
Compound literals.
Designated initializers.
Mixing declarations with code.
"

which I think is what we roughly would want. So it looks like msvc 2013 might be the relevant requirement.

> That's presumably why cfbot's appveyor build always complains about people declaring variables after statements.

IIRC even in older versions there's a flag to allow that.

> To allow that particular language feature, it looks like you have to tell it that your .c file is really a C++ program with /TP. But that opens a separate can of worms, doesn't it?

Yea, I don't think we want to go there by default soon.

Greetings,

Andres Freund
I wrote: > BTW, independently of whether to back-patch, it strikes me that what > we ought to do in HEAD (after applying this) is to just assume we have > C99-compliant behavior, and rip out the baroque logic in psnprintf > and appendPQExpBufferVA that tries to deal with pre-C99 snprintf. Here's a proposed patch for that. It also gets rid of some ancient code that tried to deal with snprintfs that were outright broken, such as writing past the end of the specified buffer. Even if anyone is still using platforms where that's a problem, I'd expect that we'd have rejected the system snprintf thanks to configure's feature checks. regards, tom lane diff --git a/config/c-library.m4 b/config/c-library.m4 index 4446a5b..6bcfcee 100644 *** a/config/c-library.m4 --- b/config/c-library.m4 *************** int main() *** 238,243 **** --- 238,270 ---- AC_MSG_RESULT([$pgac_cv_snprintf_size_t_support]) ])# PGAC_FUNC_SNPRINTF_SIZE_T_SUPPORT + # PGAC_FUNC_SNPRINTF_C99_RESULT + # ----------------------------- + # Determine whether snprintf returns the desired buffer length when + # it overruns the actual buffer length. That's required by C99 and POSIX + # but ancient platforms don't behave that way, so we must test. + # + AC_DEFUN([PGAC_FUNC_SNPRINTF_C99_RESULT], + [AC_MSG_CHECKING([whether snprintf reports buffer overrun per C99]) + AC_CACHE_VAL(pgac_cv_snprintf_c99_result, + [AC_RUN_IFELSE([AC_LANG_SOURCE([[#include <stdio.h> + #include <string.h> + + int main() + { + char buf[10]; + + if (snprintf(buf, sizeof(buf), "12345678901234567890") != 20) + return 1; + return 0; + }]])], + [pgac_cv_snprintf_c99_result=yes], + [pgac_cv_snprintf_c99_result=no], + [pgac_cv_snprintf_c99_result=cross]) + ])dnl AC_CACHE_VAL + AC_MSG_RESULT([$pgac_cv_snprintf_c99_result]) + ])# PGAC_FUNC_SNPRINTF_C99_RESULT + # PGAC_TYPE_LOCALE_T # ------------------ diff --git a/configure b/configure index 067fc43..863d07f 100755 *** a/configure --- b/configure *************** fi *** 15965,15971 **** # Run tests below here # -------------------- ! # Force use of our snprintf if system's doesn't do arg control # See comment above at snprintf test for details. if test "$enable_nls" = yes -a "$pgac_need_repl_snprintf" = no; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether snprintf supports argument control" >&5 --- 15965,15971 ---- # Run tests below here # -------------------- ! # For NLS, force use of our snprintf if system's doesn't do arg control. # See comment above at snprintf test for details. if test "$enable_nls" = yes -a "$pgac_need_repl_snprintf" = no; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether snprintf supports argument control" >&5 *************** $as_echo "$pgac_cv_snprintf_size_t_suppo *** 16259,16264 **** --- 16259,16308 ---- fi fi + # Force use of our snprintf if the system's doesn't handle buffer overrun + # as specified by C99. + if test "$pgac_need_repl_snprintf" = no; then + { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether snprintf reports buffer overrun per C99" >&5 + $as_echo_n "checking whether snprintf reports buffer overrun per C99... " >&6; } + if ${pgac_cv_snprintf_c99_result+:} false; then : + $as_echo_n "(cached) " >&6 + else + if test "$cross_compiling" = yes; then : + pgac_cv_snprintf_c99_result=cross + else + cat confdefs.h - <<_ACEOF >conftest.$ac_ext + /* end confdefs.h. 
*/ + #include <stdio.h> + #include <string.h> + + int main() + { + char buf[10]; + + if (snprintf(buf, sizeof(buf), "12345678901234567890") != 20) + return 1; + return 0; + } + _ACEOF + if ac_fn_c_try_run "$LINENO"; then : + pgac_cv_snprintf_c99_result=yes + else + pgac_cv_snprintf_c99_result=no + fi + rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ + conftest.$ac_objext conftest.beam conftest.$ac_ext + fi + + + fi + { $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_snprintf_c99_result" >&5 + $as_echo "$pgac_cv_snprintf_c99_result" >&6; } + + if test "$pgac_cv_snprintf_c99_result" != yes; then + pgac_need_repl_snprintf=yes + fi + fi + # Now we have checked all the reasons to replace snprintf if test $pgac_need_repl_snprintf = yes; then diff --git a/configure.in b/configure.in index 49257e5..1c529e7 100644 *** a/configure.in --- b/configure.in *************** for the exact reason.]])], *** 1805,1811 **** # Run tests below here # -------------------- ! # Force use of our snprintf if system's doesn't do arg control # See comment above at snprintf test for details. if test "$enable_nls" = yes -a "$pgac_need_repl_snprintf" = no; then PGAC_FUNC_SNPRINTF_ARG_CONTROL --- 1805,1811 ---- # Run tests below here # -------------------- ! # For NLS, force use of our snprintf if system's doesn't do arg control. # See comment above at snprintf test for details. if test "$enable_nls" = yes -a "$pgac_need_repl_snprintf" = no; then PGAC_FUNC_SNPRINTF_ARG_CONTROL *************** if test "$pgac_need_repl_snprintf" = no; *** 1857,1862 **** --- 1857,1871 ---- fi fi + # Force use of our snprintf if the system's doesn't handle buffer overrun + # as specified by C99. + if test "$pgac_need_repl_snprintf" = no; then + PGAC_FUNC_SNPRINTF_C99_RESULT + if test "$pgac_cv_snprintf_c99_result" != yes; then + pgac_need_repl_snprintf=yes + fi + fi + # Now we have checked all the reasons to replace snprintf if test $pgac_need_repl_snprintf = yes; then AC_DEFINE(USE_REPL_SNPRINTF, 1, [Use replacement snprintf() functions.]) diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index f458c0e..9989d3a 100644 *** a/src/backend/utils/misc/guc.c --- b/src/backend/utils/misc/guc.c *************** do_serialize(char **destptr, Size *maxby *** 9441,9466 **** if (*maxbytes <= 0) elog(ERROR, "not enough space to serialize GUC state"); - errno = 0; - va_start(vargs, fmt); n = vsnprintf(*destptr, *maxbytes, fmt, vargs); va_end(vargs); ! /* ! * Cater to portability hazards in the vsnprintf() return value just like ! * appendPQExpBufferVA() does. Note that this requires an extra byte of ! * slack at the end of the buffer. Since serialize_variable() ends with a ! * do_serialize_binary() rather than a do_serialize(), we'll always have ! * that slack; estimate_variable_size() need not add a byte for it. ! */ ! if (n < 0 || n >= *maxbytes - 1) { ! if (n < 0 && errno != 0 && errno != ENOMEM) ! /* Shouldn't happen. Better show errno description. */ ! elog(ERROR, "vsnprintf failed: %m"); ! else ! elog(ERROR, "not enough space to serialize GUC state"); } /* Shift the destptr ahead of the null terminator */ --- 9441,9459 ---- if (*maxbytes <= 0) elog(ERROR, "not enough space to serialize GUC state"); va_start(vargs, fmt); n = vsnprintf(*destptr, *maxbytes, fmt, vargs); va_end(vargs); ! if (n < 0) { ! /* Shouldn't happen. Better show errno description. */ ! elog(ERROR, "vsnprintf failed: %m"); ! } ! if (n >= *maxbytes) ! { ! /* This shouldn't happen either, really. */ ! 
elog(ERROR, "not enough space to serialize GUC state"); } /* Shift the destptr ahead of the null terminator */ diff --git a/src/common/psprintf.c b/src/common/psprintf.c index b974a99..04465a1 100644 *** a/src/common/psprintf.c --- b/src/common/psprintf.c *************** psprintf(const char *fmt,...) *** 77,83 **** * pvsnprintf * * Attempt to format text data under the control of fmt (an sprintf-style ! * format string) and insert it into buf (which has length len, len > 0). * * If successful, return the number of bytes emitted, not counting the * trailing zero byte. This will always be strictly less than len. --- 77,83 ---- * pvsnprintf * * Attempt to format text data under the control of fmt (an sprintf-style ! * format string) and insert it into buf (which has length len). * * If successful, return the number of bytes emitted, not counting the * trailing zero byte. This will always be strictly less than len. *************** psprintf(const char *fmt,...) *** 89,102 **** * Other error cases do not return, but exit via elog(ERROR) or exit(). * Hence, this shouldn't be used inside libpq. * - * This function exists mainly to centralize our workarounds for - * non-C99-compliant vsnprintf implementations. Generally, any call that - * pays any attention to the return value should go through here rather - * than calling snprintf or vsnprintf directly. - * * Note that the semantics of the return value are not exactly C99's. * First, we don't promise that the estimated buffer size is exactly right; * callers must be prepared to loop multiple times to get the right size. * Second, we return the recommended buffer size, not one less than that; * this lets overflow concerns be handled here rather than in the callers. */ --- 89,99 ---- * Other error cases do not return, but exit via elog(ERROR) or exit(). * Hence, this shouldn't be used inside libpq. * * Note that the semantics of the return value are not exactly C99's. * First, we don't promise that the estimated buffer size is exactly right; * callers must be prepared to loop multiple times to get the right size. + * (Given a C99-compliant vsnprintf, that won't happen, but it is rumored + * that some implementations don't always return the same value ...) * Second, we return the recommended buffer size, not one less than that; * this lets overflow concerns be handled here rather than in the callers. */ *************** pvsnprintf(char *buf, size_t len, const *** 105,132 **** { int nprinted; - Assert(len > 0); - - errno = 0; - - /* - * Assert check here is to catch buggy vsnprintf that overruns the - * specified buffer length. Solaris 7 in 64-bit mode is an example of a - * platform with such a bug. - */ - #ifdef USE_ASSERT_CHECKING - buf[len - 1] = '\0'; - #endif - nprinted = vsnprintf(buf, len, fmt, args); ! Assert(buf[len - 1] == '\0'); ! ! /* ! * If vsnprintf reports an error other than ENOMEM, fail. The possible ! * causes of this are not user-facing errors, so elog should be enough. ! */ ! if (nprinted < 0 && errno != 0 && errno != ENOMEM) { #ifndef FRONTEND elog(ERROR, "vsnprintf failed: %m"); --- 102,111 ---- { int nprinted; nprinted = vsnprintf(buf, len, fmt, args); ! /* We assume failure means the fmt is bogus, hence hard failure is OK */ ! if (unlikely(nprinted < 0)) { #ifndef FRONTEND elog(ERROR, "vsnprintf failed: %m"); *************** pvsnprintf(char *buf, size_t len, const *** 136,177 **** #endif } ! /* ! * Note: some versions of vsnprintf return the number of chars actually ! * stored, not the total space needed as C99 specifies. 
And at least one ! * returns -1 on failure. Be conservative about believing whether the ! * print worked. ! */ ! if (nprinted >= 0 && (size_t) nprinted < len - 1) { /* Success. Note nprinted does not include trailing null. */ return (size_t) nprinted; } - if (nprinted >= 0 && (size_t) nprinted > len) - { - /* - * This appears to be a C99-compliant vsnprintf, so believe its - * estimate of the required space. (If it's wrong, the logic will - * still work, but we may loop multiple times.) Note that the space - * needed should be only nprinted+1 bytes, but we'd better allocate - * one more than that so that the test above will succeed next time. - * - * In the corner case where the required space just barely overflows, - * fall through so that we'll error out below (possibly after - * looping). - */ - if ((size_t) nprinted <= MaxAllocSize - 2) - return nprinted + 2; - } - /* ! * Buffer overrun, and we don't know how much space is needed. Estimate ! * twice the previous buffer size, but not more than MaxAllocSize; if we ! * are already at MaxAllocSize, choke. Note we use this palloc-oriented ! * overflow limit even when in frontend. */ ! if (len >= MaxAllocSize) { #ifndef FRONTEND ereport(ERROR, --- 115,135 ---- #endif } ! if ((size_t) nprinted < len) { /* Success. Note nprinted does not include trailing null. */ return (size_t) nprinted; } /* ! * We assume a C99-compliant vsnprintf, so believe its estimate of the ! * required space, and add one for the trailing null. (If it's wrong, the ! * logic will still work, but we may loop multiple times.) ! * ! * Choke if the required space would exceed MaxAllocSize. Note we use ! * this palloc-oriented overflow limit even when in frontend. */ ! if (unlikely((size_t) nprinted > MaxAllocSize - 1)) { #ifndef FRONTEND ereport(ERROR, *************** pvsnprintf(char *buf, size_t len, const *** 183,190 **** #endif } ! if (len >= MaxAllocSize / 2) ! return MaxAllocSize; ! ! return len * 2; } --- 141,145 ---- #endif } ! return nprinted + 1; } diff --git a/src/interfaces/libpq/pqexpbuffer.c b/src/interfaces/libpq/pqexpbuffer.c index 86b16e6..0814ec6 100644 *** a/src/interfaces/libpq/pqexpbuffer.c --- b/src/interfaces/libpq/pqexpbuffer.c *************** appendPQExpBufferVA(PQExpBuffer str, con *** 295,370 **** */ if (str->maxlen > str->len + 16) { ! /* ! * Note: we intentionally leave one byte unused, as a guard against ! * old broken versions of vsnprintf. ! */ ! avail = str->maxlen - str->len - 1; ! ! errno = 0; nprinted = vsnprintf(str->data + str->len, avail, fmt, args); /* ! * If vsnprintf reports an error other than ENOMEM, fail. */ ! if (nprinted < 0 && errno != 0 && errno != ENOMEM) { markPQExpBufferBroken(str); return true; } ! /* ! * Note: some versions of vsnprintf return the number of chars ! * actually stored, not the total space needed as C99 specifies. And ! * at least one returns -1 on failure. Be conservative about ! * believing whether the print worked. ! */ ! if (nprinted >= 0 && (size_t) nprinted < avail - 1) { /* Success. Note nprinted does not include trailing null. */ str->len += nprinted; return true; } ! if (nprinted >= 0 && (size_t) nprinted > avail) ! { ! /* ! * This appears to be a C99-compliant vsnprintf, so believe its ! * estimate of the required space. (If it's wrong, the logic will ! * still work, but we may loop multiple times.) Note that the ! * space needed should be only nprinted+1 bytes, but we'd better ! * allocate one more than that so that the test above will succeed ! * next time. ! * ! 
* In the corner case where the required space just barely ! * overflows, fail. ! */ ! if (nprinted > INT_MAX - 2) ! { ! markPQExpBufferBroken(str); ! return true; ! } ! needed = nprinted + 2; ! } ! else { ! /* ! * Buffer overrun, and we don't know how much space is needed. ! * Estimate twice the previous buffer size, but not more than ! * INT_MAX. ! */ ! if (avail >= INT_MAX / 2) ! needed = INT_MAX; ! else ! needed = avail * 2; } } else { /* * We have to guess at how much to enlarge, since we're skipping the ! * formatting work. */ needed = 32; } --- 295,344 ---- */ if (str->maxlen > str->len + 16) { ! avail = str->maxlen - str->len; nprinted = vsnprintf(str->data + str->len, avail, fmt, args); /* ! * If vsnprintf reports an error, fail (we assume this means there's ! * something wrong with the format string). */ ! if (unlikely(nprinted < 0)) { markPQExpBufferBroken(str); return true; } ! if ((size_t) nprinted < avail) { /* Success. Note nprinted does not include trailing null. */ str->len += nprinted; return true; } ! /* ! * We assume a C99-compliant vsnprintf, so believe its estimate of the ! * required space, and add one for the trailing null. (If it's wrong, ! * the logic will still work, but we may loop multiple times.) ! * ! * Choke if the required space would exceed INT_MAX, since str->maxlen ! * can't represent more than that. ! */ ! if (unlikely(nprinted > INT_MAX - 1)) { ! markPQExpBufferBroken(str); ! return true; } + needed = nprinted + 1; } else { /* * We have to guess at how much to enlarge, since we're skipping the ! * formatting work. Fortunately, because of enlargePQExpBuffer's ! * preference for power-of-2 sizes, this number isn't very sensitive; ! * the net effect is that we'll double the buffer size before trying ! * to run vsnprintf, which seems sensible. */ needed = 32; }
On Thu, Aug 16, 2018 at 11:06 AM, Andres Freund <andres@anarazel.de> wrote:
> On 2018-08-16 10:54:01 +1200, Thomas Munro wrote:
>> Really? I am not an MSVC user but I had the impression that their C mode (/TC or files named .c) was stuck on C89/C90 as a matter of policy, as Herb Sutter explained here (though maybe the situation has changed since then):
>
> They revised their position gradually, starting soon after. They claim full C99 "language" (vs library, which is also pretty complete) compliance now. I've not yet bothered to fully figure out which version supports what however. Nor am I really sure about the whole flag thing, it appears there's a gui element to choose, which we might need to mirror on the xml level.

Hah, I see. Thanks are apparently due to FFmpeg for helping them change their minds. That seems like a bit of a catch-22 for projects that care about portability. (Maybe if we start writing C11 they'll change the compiler to keep up? Actually I already did that once, with an anonymous union that turned the build farm slightly red...)

https://stackoverflow.com/questions/27826409/what-is-the-official-status-of-c99-support-in-vs2013

> ...
> which I think is what we roughly would want. So it looks like msvc 2013 might be the relevant requirement.

FWIW cfbot is using Visual Studio 2010 right now. Appveyor provides 2008, 2010, 2012, 2013 (AKA 12.0), 2015, 2017, and to test with a different toolchain you can take the example patch from https://wiki.postgresql.org/wiki/Continuous_Integration and add a line like this to the end of the install section (worked for me; for 2015+ you probably also need to request a different image):

- 'call "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" x86_amd64'

I'll make that change to cfbot if we decree that it is the new baseline for PostgreSQL on Windows. Or I could do it sooner if anyone wants to be able to post test C99 patches in the Commitfest before we decide.

--
Thomas Munro
http://www.enterprisedb.com
There's also clang on Windows, which VS apparently supports. With clang on Windows PG could even make use of GCC/Clang C extensions :^)
Hi,

On 2018-08-15 18:31:10 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > I think we could get a start by adding that test to configure, without relying on it for now (i.e. keeping mylodon with -Wc99-extensions -Werror=c99-extensions alive). That'd tell us about which machines, besides presumably gaur, we'd need to either kick to the curb or change.
>
> Sure, no objection to putting that in just to see how much of the buildfarm can handle it. If the answer turns out to be "a lot", we might have to reconsider, but gathering data seems like the first thing to do.

I've pushed a minimal version adding the C99 test. If we were to actually go for this permanently, we'd likely want to clean up a bunch of other tests (say removing PGAC_C_VA_ARGS), but I don't see much point in doing that while just gathering evidence (to the contrary, it seems like it'd just muddy the water a bit).

Greetings,

Andres Freund
On 2018-08-16 01:41:34 -0700, Andres Freund wrote:
> I've pushed a minimal version adding the C99 test.

So, we get:

* lotsa animals, unsurprisingly, showing C99 work without any flags:

	checking for ccache gcc option to accept ISO C99... none needed

* rhinoceros, nudibranch, grouse, ...:

	https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rhinoceros&dt=2018-08-16%2008%3A45%3A01&stg=configure
	https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=nudibranch&dt=2018-08-16%2009%3A16%3A46&stg=configure
	https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=grouse&dt=2018-08-16%2009%3A17%3A25&stg=configure

	checking for ccache gcc option to accept ISO C99... -std=gnu99

  (or variations thereof) So, the autoconf magic is doing its thing here.

* dunlin (icc):

	https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=dunlin&dt=2018-08-16%2009%3A35%3A19&stg=configure

	checking for icc option to accept ISO C99... -std=gnu99

  (later fails, but not newly so, and just because of ENOSPC)

* anole (HP C compiler):

	https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2018-08-16 09%3A32%3A19

	checking for cc option to accept ISO C99... none needed

* dromedary:

	https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=dromedary&dt=2018-08-16%2008%3A37%3A28&stg=configure

	checking for ccache gcc option to accept ISO C99... unsupported

  I suspect that's because of the '-ansi' flag in CFLAGS, not because the compiler is incapable of actually supporting C99.

Besides gaur, I'm also awaiting castoroides' results. The latter definitely does support C99, but I'm not sure autoconf pushes hard enough. I think every other relevant animal has reported back.

Greetings,

Andres Freund
On 16/08/2018 13:18, Andres Freund wrote: > checking for ccache gcc option to accept ISO C99... unsupported > > I suspect that's because of the '-ansi' flag in CFLAGS, not because > the compiler is incapable of actually supporting C99. -ansi is equivalent to -std=c90. If we make the switch, the build farm configuration should just probably replace -ansi with -std=c99. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 16/08/2018 01:06, Andres Freund wrote: > So it looks like msvc 2013 might be the relevant requirement. According to my research (completely untested in practice), you need 2010 for mixed code and declarations and 2013 for named initialization of structs. I wonder what raising the msvc requirement would imply for supporting older Windows versions. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
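(Purely as an illustration of the two constructs under discussion here -- invented example code, not Peter's test file:)

    #include <stdbool.h>

    struct opts
    {
        int     level;
        bool    verbose;
    };

    static int
    count_up(int n)
    {
        int     total = 0;

        total += n;             /* a statement ... */

        /* ... followed by a declaration (C99 mixed declarations and code),
         * using a designated ("named") struct initializer */
        struct opts o = {.level = 1, .verbose = false};

        /* declaration scoped to the for loop */
        for (int i = 0; i < n; i++)
            total += i + o.level;

        return total;
    }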
Hi, On 2018-08-16 14:26:07 +0200, Peter Eisentraut wrote: > On 16/08/2018 13:18, Andres Freund wrote: > > checking for ccache gcc option to accept ISO C99... unsupported > > > > I suspect that's because of the '-ansi' flag in CFLAGS, not because > > the compiler is incapable of actually supporting C99. > > -ansi is equivalent to -std=c90. If we make the switch, the build farm > configuration should just probably replace -ansi with -std=c99. Right. I just hadn't checked the addition of -std=c99 doesn't override the -ansi. Presumably just an ordering thing. Greetings, Andres Freund
On 16/08/2018 14:30, Andres Freund wrote: > Hi, > > On 2018-08-16 14:26:07 +0200, Peter Eisentraut wrote: >> On 16/08/2018 13:18, Andres Freund wrote: >>> checking for ccache gcc option to accept ISO C99... unsupported >>> >>> I suspect that's because of the '-ansi' flag in CFLAGS, not because >>> the compiler is incapable of actually supporting C99. >> >> -ansi is equivalent to -std=c90. If we make the switch, the build farm >> configuration should just probably replace -ansi with -std=c99. > > Right. I just hadn't checked the addition of -std=c99 doesn't override > the -ansi. Presumably just an ordering thing. The mistake is putting -ansi or similar options into CFLAGS. You need to put them into CC. The Autoconf test for C99 correctly puts the candidate options into CC, but then if the test compilation invocation is something like "$CC $CFLAGS -c conftest.c", the -ansi cancels out the earlier -std=c99 or whatever. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi, On 2018-08-16 14:28:25 +0200, Peter Eisentraut wrote: > On 16/08/2018 01:06, Andres Freund wrote: > > So it looks like msvc 2013 might be the relevant requirement. > > According to my research (completely untested in practice), you need > 2010 for mixed code and declarations and 2013 for named initialization > of structs. > > I wonder what raising the msvc requirement would imply for supporting > older Windows versions. One relevant tidbit is that afaict 2013 still allows *targeting* older versions of windows, down to XP and 2003, while requiring newer platforms to run. See: https://docs.microsoft.com/en-us/visualstudio/productinfo/vs2013-compatibility-vs I don't know if that's hard to do, but I strongly suspect that the existing installers already do that (otherwise supporting newer versions would likely require separate builds). 2013 still runs on Windows 7, should you want that: https://docs.microsoft.com/en-us/visualstudio/productinfo/vs2013-sysrequirements-vs According to https://www.postgresql.org/download/windows/ the available binaries already effectively restrict windows support: EDB installers, for 10, restrict to: 64 Bit Windows: 2016, 2012 R2 & R1, 2008 R2, 7, 8, 10 32 Bit Windows: 2008 R1, 7, 8, 10 BIGSQL to: Windows 10 and Windows Server 2012. Of those, 2013 only doesn't run on 2008 R1 anymore. Which still can be targeted from the newer windows versions. It'd be good to get confirmation that the windows binaries / installers are indeed built on newer platforms than the oldest supported version. Random observation: http://www.openscg.com/bigsql/postgresql/installers/ seems to indicate that packages aren't updated anymore. While it says "(09-Aug-18)" besides the major versions, it does not actually in fact have the last set of minor releases. I suspect that's related to openscg's acquisition by amazon? Either they need to catch up, or we need to take down the page and probably alert people about that fact. Greetings, Andres Freund
Hi, On 2018-08-16 04:18:59 -0700, Andres Freund wrote: > Besides gaur, I'm also awaiting castoroides' results. The latter > definitely does support C99, but I'm not sure autoconf pushes hard > enough. I think every other relevant animal has reported back. Castoroides wasn't a problem, -D_STDC_C99= does the trick. But enabling C99 support triggered a somewhat weird failure on protosciurus (also solaris, but gcc) https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=protosciurus&dt=2018-08-16%2013%3A37%3A46 It detects C99 support successfully via -std=gnu99 but then fails somewhere during build with: gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -g -I/usr/local/include -m64 -I../../../../src/include -c -o gistutil.o gistutil.c In file included from gistutil.c:24: ../../../../src/include/utils/float.h: In function `get_float4_infinity': ../../../../src/include/utils/float.h:71: error: `__builtin_infinity' undeclared (first use in this function) ../../../../src/include/utils/float.h:71: error: (Each undeclared identifier is reported only once ../../../../src/include/utils/float.h:71: error: for each function it appears in.) ../../../../src/include/utils/float.h: In function `get_float8_infinity': ../../../../src/include/utils/float.h:91: error: `__builtin_infinity' undeclared (first use in this function) ../../../../src/include/utils/float.h: In function `get_float4_nan': ../../../../src/include/utils/float.h:108: error: pointer value used where a floating point value was expected ../../../../src/include/utils/float.h: In function `get_float8_nan': ../../../../src/include/utils/float.h:121: error: pointer value used where a floating point value was expected ../../../../src/include/utils/float.h: In function `check_float4_val': ../../../../src/include/utils/float.h:136: warning: implicit declaration of function `__builtin_isinf' ../../../../src/include/utils/float.h: In function `float4_eq': ../../../../src/include/utils/float.h:283: warning: implicit declaration of function `__builtin_isnan' While I'd personally have no problem kicking gcc 3.4 to the curb, I'm still confused what causes this error mode. Kinda looks like out-of-sync headers with gcc or something. Greetings, Andres Freund
Andres Freund <andres@anarazel.de> writes: > But enabling C99 support triggered a somewhat weird failure on > protosciurus (also solaris, but gcc) > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=protosciurus&dt=2018-08-16%2013%3A37%3A46 > It detects C99 support successfully via -std=gnu99 but then fails > somewhere during build with: > ../../../../src/include/utils/float.h:71: error: `__builtin_infinity' undeclared (first use in this function) > While I'd personally have no problem kicking gcc 3.4 to the curb, I'm > still confused what causes this error mode. Kinda looks like > out-of-sync headers with gcc or something. Yeah, this is *absolutely* unsurprising for a non-native gcc installation on an old platform. It only works to the extent that the platform's library headers are up to snuff. The gcc installation process actually knows enough to do a certain amount of editing of the platform's headers, and install "fixed" copies of those headers where gcc will find them before the ones in /usr/include. But it looks like the fixing didn't account for __builtin_infinity on this platform. Maybe a newer gcc would get it right. I just launched gaur/pademelon builds for you, though I'm quite certain they'll both report "unsupported". If we go through with this, my plan would be to retire pademelon (vendor's compiler) and see if I can install gcc 4.something to replace the 2.95.3 version that gaur is using. The -ansi option that dromedary is using is certainly losable, too (I'd probably replace it with either --std=c99 or --std=gnu99, whichever autoconf *doesn't* pick by default, just to be contrary). Andrew is going to need to give us a policy decision on whether to keep using the existing animal names or take out new ones for these rather-significantly-modified configurations. regards, tom lane
Hi, On 2018-08-16 11:41:30 -0400, Tom Lane wrote: > Andres Freund <andres@anarazel.de> writes: > > But enabling C99 support triggered a somewhat weird failure on > > protosciurus (also solaris, but gcc) > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=protosciurus&dt=2018-08-16%2013%3A37%3A46 > > > It detects C99 support successfully via -std=gnu99 but then fails > > somewhere during build with: > > ../../../../src/include/utils/float.h:71: error: `__builtin_infinity' undeclared (first use in this function) > > > While I'd personally have no problem kicking gcc 3.4 to the curb, I'm > > still confused what causes this error mode. Kinda looks like > > out-of-sync headers with gcc or something. > > Yeah, this is *absolutely* unsurprising for a non-native gcc installation > on an old platform. It only works to the extent that the platform's > library headers are up to snuff. The gcc installation process actually > knows enough to do a certain amount of editing of the platform's headers, > and install "fixed" copies of those headers where gcc will find them > before the ones in /usr/include. But it looks like the fixing didn't > account for __builtin_infinity on this platform. Maybe a newer gcc > would get it right. Sure, but that still requires the headers to behave differently between C89 and C99 mode, as this worked before. But it turns out there's two different math.h implementation headers, depending on c99 being enabled (math_c99.h being the troublesome). If I understand correctly the problem is more that the system library headers are *newer* (and assume a sun studio emulating/copying quite a bit of gcc) than the gcc that's being used, and therefore gcc fails. Gcc 3.4.3 has been released November 4, 2004, whereas solaris 10 is from January 31, 2005. The sun studio used for castoroides running on, I assume, the same hardware, is quite a bit newer in turn. > I just launched gaur/pademelon builds for you, though I'm quite certain > they'll both report "unsupported". Yea, that seems fairly likely. > If we go through with this, my plan would be to retire pademelon > (vendor's compiler) and see if I can install gcc 4.something to > replace the 2.95.3 version that gaur is using. K. > The -ansi option that dromedary is using is certainly losable, too > (I'd probably replace it with either --std=c99 or --std=gnu99, > whichever autoconf *doesn't* pick by default, just to be contrary). Makes sense. > Andrew is going to need to give us a policy decision on whether to > keep using the existing animal names or take out new ones for these > rather-significantly-modified configurations. CCed. Greetings, Andres Freund
On 08/16/2018 12:05 PM, Andres Freund wrote: > Hi, > > On 2018-08-16 11:41:30 -0400, Tom Lane wrote: >> Andres Freund <andres@anarazel.de> writes: >>> But enabling C99 support triggered a somewhat weird failure on >>> protosciurus (also solaris, but gcc) >>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=protosciurus&dt=2018-08-16%2013%3A37%3A46 >>> It detects C99 support successfully via -std=gnu99 but then fails >>> somewhere during build with: >>> ../../../../src/include/utils/float.h:71: error: `__builtin_infinity' undeclared (first use in this function) >>> While I'd personally have no problem kicking gcc 3.4 to the curb, I'm >>> still confused what causes this error mode. Kinda looks like >>> out-of-sync headers with gcc or something. >> Yeah, this is *absolutely* unsurprising for a non-native gcc installation >> on an old platform. It only works to the extent that the platform's >> library headers are up to snuff. The gcc installation process actually >> knows enough to do a certain amount of editing of the platform's headers, >> and install "fixed" copies of those headers where gcc will find them >> before the ones in /usr/include. But it looks like the fixing didn't >> account for __builtin_infinity on this platform. Maybe a newer gcc >> would get it right. > Sure, but that still requires the headers to behave differently between > C89 and C99 mode, as this worked before. But it turns out there's two > different math.h implementation headers, depending on c99 being enabled > (math_c99.h being the troublesome). If I understand correctly the > problem is more that the system library headers are *newer* (and assume > a sun studio emulating/copying quite a bit of gcc) than the gcc that's > being used, and therefore gcc fails. > > Gcc 3.4.3 has been released November 4, 2004, whereas solaris 10 is from > January 31, 2005. The sun studio used for castoroides running on, I > assume, the same hardware, is quite a bit newer in turn. > > >> I just launched gaur/pademelon builds for you, though I'm quite certain >> they'll both report "unsupported". > Yea, that seems fairly likely. > > >> If we go through with this, my plan would be to retire pademelon >> (vendor's compiler) and see if I can install gcc 4.something to >> replace the 2.95.3 version that gaur is using. > K. > > >> The -ansi option that dromedary is using is certainly losable, too >> (I'd probably replace it with either --std=c99 or --std=gnu99, >> whichever autoconf *doesn't* pick by default, just to be contrary). > Makes sense. > > >> Andrew is going to need to give us a policy decision on whether to >> keep using the existing animal names or take out new ones for these >> rather-significantly-modified configurations. > CCed. > The usual policy is that animal names need to change if the OS or compiler names change, but not if just the versions of those change. There is a script in the suite to update those settings, and the admins can also help if necessary. cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 16/08/2018 15:00, Andres Freund wrote: >> According to my research (completely untested in practice), you need >> 2010 for mixed code and declarations and 2013 for named initialization >> of structs. >> >> I wonder what raising the msvc requirement would imply for supporting >> older Windows versions. > One relevant tidbit is that afaict 2013 still allows *targeting* older > versions of windows, down to XP and 2003, while requiring a newer > platforms to run. See: > https://docs.microsoft.com/en-us/visualstudio/productinfo/vs2013-compatibility-vs > I don't know if that's hard to do, but I strongly suspect that the > existing installers already do that (otherwise supporting newer versions > would likely require separate builds). So, does anyone with Windows build experience want to comment on this? The proposal is to desupport anything older than (probably) MSVC 2013, or alternatively anything that cannot compile the attached test file. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes: > So, does anyone with Windows build experience want to comment on this? > The proposal is to desupport anything older than (probably) MSVC 2013, > or alternatively anything that cannot compile the attached test file. We've got a buildfarm handy that could answer the question. Let's just stick a test function in there for a day and see which animals fail. regards, tom lane
Hi, On 2018-08-21 13:29:20 -0400, Tom Lane wrote: > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes: > > So, does anyone with Windows build experience want to comment on this? > > The proposal is to desupport anything older than (probably) MSVC 2013, > > or alternatively anything that cannot compile the attached test file. > > We've got a buildfarm handy that could answer the question. > Let's just stick a test function in there for a day and see > which animals fail. I think we pretty much know the answer already, anything before 2013 will fail. The question is more whether that's problematic for the people building on windows. My theory, quoted by Peter upthread, is that it shouldn't be problematic because 2013 can build binaries that run on XP etc. Greetings, Andres Freund
Andres Freund <andres@anarazel.de> writes: > On 2018-08-21 13:29:20 -0400, Tom Lane wrote: >> We've got a buildfarm handy that could answer the question. >> Let's just stick a test function in there for a day and see >> which animals fail. > I think we pretty much know the answer already, anything before 2013 > will fail. Do we know that for sure? I thought it was theoretical. regards, tom lane
Hi, On 2018-08-21 13:46:10 -0400, Tom Lane wrote: > Andres Freund <andres@anarazel.de> writes: > > On 2018-08-21 13:29:20 -0400, Tom Lane wrote: > >> We've got a buildfarm handy that could answer the question. > >> Let's just stick a test function in there for a day and see > >> which animals fail. > > > I think we pretty much know the answer already, anything before 2013 > > will fail. > > Do we know that for sure? I thought it was theoretical. Pretty much. I'm on mobile data so I don't want to search too much, but I've previously looked it up, and designated initializer support was introduced in 2013. See e.g. the graph in https://blogs.msdn.microsoft.com/somasegar/2013/06/28/c-conformance-roadmap/ Greetings, Andres Freund
On 08/21/2018 01:46 PM, Tom Lane wrote: > Andres Freund <andres@anarazel.de> writes: >> On 2018-08-21 13:29:20 -0400, Tom Lane wrote: >>> We've got a buildfarm handy that could answer the question. >>> Let's just stick a test function in there for a day and see >>> which animals fail. > >> I think we pretty much know the answer already, anything before 2013 >> will fail. > > Do we know that for sure? I thought it was theoretical. I thought I remembered a message where it had been looked up in docs, but I think the one I was remembering was Peter's "According to my research (completely untested in practice), you need 2010 for mixed code and declarations and 2013 for named initialization of structs." [1] which didn't quite actually say it was documented. -Chap [1] https://www.postgresql.org/message-id/ef986aa7-c7ca-ec34-19d9-fef38716b109%402ndquadrant.com
On 08/21/2018 01:31 PM, Andres Freund wrote: > Hi, > > On 2018-08-21 13:29:20 -0400, Tom Lane wrote: >> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes: >>> So, does anyone with Windows build experience want to comment on this? >>> The proposal is to desupport anything older than (probably) MSVC 2013, >>> or alternatively anything that cannot compile the attached test file. >> We've got a buildfarm handy that could answer the question. >> Let's just stick a test function in there for a day and see >> which animals fail. > I think we pretty much know the answer already, anything before 2013 > will fail. The question is more whether that's problematic for the > people building on windows. My theory, quoted by Peter upthread, is > that it shouldn't be problematic because 2013 can build binaries that > run on XP etc. > XP at least is essentially a dead platform for us. My animals are not able to build anything after release 10. cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 08/21/2018 11:06 AM, Andrew Dunstan wrote: > > > > XP at least is essentially a dead platform for us. My animals are not > able to build anything after release 10. I wouldn't think XP should even be on our list anymore. Microsoft hasn't supported it in 4 years. JD -- Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc *** A fault and talent of mine is to tell it exactly how it is. *** PostgreSQL centered full stack support, consulting and development. Advocate: @amplifypostgres || Learn: https://postgresconf.org ***** Unless otherwise stated, opinions are my own. *****
On 2018-08-21 14:06:18 -0400, Andrew Dunstan wrote: > > > On 08/21/2018 01:31 PM, Andres Freund wrote: > > Hi, > > > > On 2018-08-21 13:29:20 -0400, Tom Lane wrote: > > > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes: > > > > So, does anyone with Windows build experience want to comment on this? > > > > The proposal is to desupport anything older than (probably) MSVC 2013, > > > > or alternatively anything that cannot compile the attached test file. > > > We've got a buildfarm handy that could answer the question. > > > Let's just stick a test function in there for a day and see > > > which animals fail. > > I think we pretty much know the answer already, anything before 2013 > > will fail. The question is more whether that's problematic for the > > people building on windows. My theory, quoted by Peter upthread, is > > that it shouldn't be problematic because 2013 can build binaries that > > run on XP etc. > XP at least is essentially a dead platform for us. My animals are not able > to build anything after release 10. I'm perfectly fine with that, FWIW. It's out of extended support for several years now. But my point was that you can use newer versions of MSVC to build things that run on XP (and more relevantly 2008, vista, 7 etc). No idea if we'd need to change anything in our build infra for that. Greetings, Andres Freund
On 2018-08-21 11:09:15 -0700, Joshua D. Drake wrote: > On 08/21/2018 11:06 AM, Andrew Dunstan wrote: > > > > > > > > XP at least is essentially a dead platform for us. My animals are not > > able to build anything after release 10. > > I wouldn't think XP should even be on our list anymore. Microsoft hasn't > supported it in 4 years. XP isn't the only thing relevant here, vista and 2008 R1 are in the same class. Greetings, Andres Freund
On 08/21/2018 04:49 PM, Andres Freund wrote: > On 2018-08-21 11:09:15 -0700, Joshua D. Drake wrote: >> On 08/21/2018 11:06 AM, Andrew Dunstan wrote: >>> >>> >>> XP at least is essentially a dead platform for us. My animals are not >>> able to build anything after release 10. >> I wouldn't think XP should even be on our list anymore. Microsoft hasn't >> supported it in 4 years. > XP isn't the only thing relevant here, vista and 2008 R1 are in the same > class. > I do have a machine in my laptop graveyard with Vista. The only WS2008 instance I have available is R2 and AWS doesn't seem to have any AMIs for R1. Honestly, I don't think these matter terribly much. Anyone building now is not likely to be targeting them. Of the buildfarm animals, dory looks OK, hamerkop, bowerbird and thrips look not ok, and whelk and woodlouse look borderline. I'm perfectly prepared to upgrade bowerbird if necessary. cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2018-08-21 17:58:00 -0400, Andrew Dunstan wrote: > > > On 08/21/2018 04:49 PM, Andres Freund wrote: > > On 2018-08-21 11:09:15 -0700, Joshua D. Drake wrote: > > > On 08/21/2018 11:06 AM, Andrew Dunstan wrote: > > > > > > > > > > > > XP at least is essentially a dead platform for us. My animals are not > > > > able to build anything after release 10. > > > I wouldn't think XP should even be on our list anymore. Microsoft hasn't > > > supported it in 4 years. > > XP isn't the only thing relevant here, vista and 2008 R1 are in the same > > class. > > > > > I do have a machine in my laptop graveyard with Vista. The only WS2008 > instance I have available is R2 and AWS doesn't seem to have any AMIs for R1. > > Honestly, I don't think these matter terribly much. Anyone building now is > not likely to be targeting them. I agree, I think we should just decree that the minimum is MSVC 2013 and that people building 12 need to deal with that. I would personally *additionally* say that we officially don't support *running* (not compiling) on XP, 2003, 2008R1 and Vista (all unsupported by MS) - but that's a somewhat orthogonal decision. Greetings, Andres Freund
On Tue, Aug 21, 2018 at 11:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Andres Freund <andres@anarazel.de> writes: >> On 2018-08-21 13:29:20 -0400, Tom Lane wrote: >>> We've got a buildfarm handy that could answer the question. >>> Let's just stick a test function in there for a day and see >>> which animals fail. > >> I think we pretty much know the answer already, anything before 2013 >> will fail. > > Do we know that for sure? I thought it was theoretical. > I have MSVC 2010 on my Windows 7 VM where I have tried the C99 constructs provided by Peter E. and below are my findings: Tried compiling below function in one of the .c files in the src/backend ----------------------------------------------------------------------------------------------- int bar() { int x; x = 1; int y; y = 2; for (int i = 0; i < 5; i++) ; return 0; } error C2143: syntax error : missing ';' before 'type' error C2065: 'y' : undeclared identifier error C2143: syntax error : missing ';' before 'type' error C2143: syntax error : missing ';' before 'type' error C2143: syntax error : missing ')' before 'type' error C2143: syntax error : missing ';' before 'type' error C2065: 'i' : undeclared identifier warning C4552: '<' : operator has no effect; expected operator with side-effect error C2065: 'i' : undeclared identifier error C2059: syntax error : ')' Tried compiling below struct in one of the .c files in the src/backend ----------------------------------------------------------------------------------------------- struct foo { int a; int b; }; struct foo f = { .a = 1, .b = 2 }; error C2059: syntax error : '.' I have also tried compiling above in standalone console application project in MSVC 2010 and able to compile the function successfully, but struct is giving the same error as above. So, it seems to me that it won't work on <= MSVC 2010. I am personally okay with upgrading my Windows VM. I have found a couple of links which suggest that MSVC 2015 has a good amount of conformance with C99 [1]. It seems MSVC 2013 also has decent support for C99 [2], but I am not able to verify if the constructs shared by Peter E can be compiled on it. [1] https://social.msdn.microsoft.com/Forums/en-US/fa17bcdd-7165-4645-a676-ef3797b95918/details-on-c99-support-in-msvc?forum=vcgeneral [2] https://blogs.msdn.microsoft.com/vcblog/2013/07/19/c99-library-support-in-visual-studio-2013/ -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Hi

On Thu, Aug 16, 2018 at 6:30 PM, Andres Freund <andres@anarazel.de> wrote:
> Hi,
>
> On 2018-08-16 14:28:25 +0200, Peter Eisentraut wrote:
> > On 16/08/2018 01:06, Andres Freund wrote:
> > > So it looks like msvc 2013 might be the relevant requirement.
> >
> > According to my research (completely untested in practice), you need
> > 2010 for mixed code and declarations and 2013 for named initialization
> > of structs.
> >
> > I wonder what raising the msvc requirement would imply for supporting
> > older Windows versions.
>
> One relevant tidbit is that afaict 2013 still allows *targeting* older
> versions of windows, down to XP and 2003, while requiring newer
> platforms to run. See:
> https://docs.microsoft.com/en-us/visualstudio/productinfo/vs2013-compatibility-vs
>
> I don't know if that's hard to do, but I strongly suspect that the
> existing installers already do that (otherwise supporting newer versions
> would likely require separate builds).
>
> 2013 still runs on Windows 7, should you want that:
> https://docs.microsoft.com/en-us/visualstudio/productinfo/vs2013-sysrequirements-vs
>
> According to https://www.postgresql.org/download/windows/
> the available binaries already effectively restrict windows support:
>
> EDB installers, for 10, restrict to:
> 64 Bit Windows: 2016, 2012 R2 & R1, 2008 R2, 7, 8, 10
> 32 Bit Windows: 2008 R1, 7, 8, 10
>
> BIGSQL to: Windows 10 and Windows Server 2012.
>
> Of those, 2013 only doesn't run on 2008 R1 anymore. Which still can be
> targeted from the newer windows versions.
>
> It'd be good to get confirmation that the windows binaries / installers
> are indeed built on newer platforms than the oldest supported version.

We build windows binaries (>=9.3) on Windows 7 and Windows Server 2012 R2.
For 9.3, the Visual Studio version is 2010 and for 9.4 and v10, we use 2013.
For v11, we use 2017.

> Random observation: http://www.openscg.com/bigsql/postgresql/installers/
> seems to indicate that packages aren't updated anymore. While it says
> "(09-Aug-18)" besides the major versions, it does not actually in fact
> have the last set of minor releases. I suspect that's related to
> openscg's acquisition by amazon? Either they need to catch up, or we
> need to take down the page and probably alert people about that fact.
>
> Greetings,
>
> Andres Freund

--
Sandeep Thakkar
On Wed, Aug 22, 2018 at 4:59 AM, Andres Freund <andres@anarazel.de> wrote:
> On 2018-08-21 17:58:00 -0400, Andrew Dunstan wrote:
> >
> > On 08/21/2018 04:49 PM, Andres Freund wrote:
> > > On 2018-08-21 11:09:15 -0700, Joshua D. Drake wrote:
> > > > On 08/21/2018 11:06 AM, Andrew Dunstan wrote:
> > > > >
> > > > > XP at least is essentially a dead platform for us. My animals are not
> > > > > able to build anything after release 10.
> > > > I wouldn't think XP should even be on our list anymore. Microsoft hasn't
> > > > supported it in 4 years.
> > > XP isn't the only thing relevant here, vista and 2008 R1 are in the same
> > > class.
> >
> > I do have a machine in my laptop graveyard with Vista. The only WS2008
> > instance I have available is R2 and AWS doesn't seem to have any AMIs for R1.
> >
> > Honestly, I don't think these matter terribly much. Anyone building now is
> > not likely to be targeting them.
>
> I agree, I think we should just decree that the minimum is MSVC 2013 and
> that people building 12 need to deal with that. I would personally
> *additionally* say that we officially don't support *running* (not
> compiling) on XP, 2003, 2008R1 and Vista (all unsupported by MS) - but
> that's a somewhat orthogonal decision.

We build windows binaries (>=9.3) on Windows 7 and Windows Server 2012 R2.
For 9.3, the Visual Studio version is 2010 and for 9.4 and v10, we use 2013.
For v11, we use 2017.

> Greetings,
>
> Andres Freund

--
Sandeep Thakkar
Hi, On 2018-08-22 17:17:27 +0530, Sandeep Thakkar wrote: > > We build windows binaries (>=9.3) on Windows 7 and Windows Server 2012 R2. > For 9.3, the Visual Studio version is 2010 and for 9.4 and v10, we use > 2013. For v11, we use 2017. Sandeep: Thanks for the information. Did you ever encounter problems (at build or during runtime) with using those binaries on older platforms? Everyone: Given the fact that all the people building windows packages currently use a new enough stack by a fair margin, I think we should conclude that there's no obstacle on the windows side of things. If we agree on that, I'm going to propose a patch that includes: - relevant cleanups to configure - adapts sources.sgml to refer to C99 instead of C89 - add some trivial conversions to for(int i;;) and struct initializers, so the relevant old animals fail - adds a configure check to enable errors with vla usage (-Werror=vla) Questions: - do we want to make declarations at arbitrary points errors? It's already a warning currently. - other new restrictions that we want to introduce at the same time? Greetings, Andres Freund
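(For readers unfamiliar with the -Werror=vla item: a variable-length array is an automatic array whose size is only known at run time. Hypothetical example, not code from the proposed patch:)

    #include <string.h>

    static void
    copy_keys(int nkeys, const int *src, int *dst)
    {
        int     local[nkeys];   /* C99 VLA -- exactly what -Werror=vla would reject */

        memcpy(local, src, nkeys * sizeof(int));
        memcpy(dst, local, nkeys * sizeof(int));
    }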
On Wed, Aug 22, 2018 at 5:32 PM, Andres Freund <andres@anarazel.de> wrote:
> Hi,
>
> On 2018-08-22 17:17:27 +0530, Sandeep Thakkar wrote:
> > > We build windows binaries (>=9.3) on Windows 7 and Windows Server 2012 R2.
> > For 9.3, the Visual Studio version is 2010 and for 9.4 and v10, we use
> > 2013. For v11, we use 2017.
>
> Sandeep: Thanks for the information. Did you ever encounter problems (at
> build or during runtime) with using those binaries on older platforms?

IIRC when the binaries were built with VC++ 2013 on 9.4, we had problems
running them on XP and hence we had used the "/p:PlatformToolset=v120_xp"
option to msbuild during build time. From v10, we stopped using that
toolset and instead used the default one, i.e. v120.

> Everyone: Given the fact that all the people building windows packages
> currently use a new enough stack by a fair margin, I think we should
> conclude that there's no obstacle on the windows side of things.
>
> If we agree on that, I'm going to propose a patch that includes:
> - relevant cleanups to configure
> - adapts sources.sgml to refer to C99 instead of C89
> - add some trivial conversions to for(int i;;) and struct initializers,
>   so the relevant old animals fail
> - adds a configure check to enable errors with vla usage (-Werror=vla)
>
> Questions:
>
> - do we want to make declarations at arbitrary points errors? It's
>   already a warning currently.
> - other new restrictions that we want to introduce at the same time?
>
> Greetings,
>
> Andres Freund

--
Sandeep Thakkar
On 22/08/2018 14:02, Andres Freund wrote: > If we agree on that, I'm going to propose a patch that includes: > - relevant cleanups to configure > - adapts sources.sgml to refer to C99 instead of C89 > - add some trivial conversions to for(int i;;) and struct initializers, > so the relevant old animals fail > - adds a configure check to enable errors with vla usage (-Werror=vla) sounds good > - do we want to make declarations at arbitrary points errors? It's > already a warning currently. While there are legitimate criticisms, it's a standard feature in C, C++, and many other languages, so I don't see what we'd gain by fighting it. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi, On 2018-08-22 16:56:15 +0200, Peter Eisentraut wrote: > On 22/08/2018 14:02, Andres Freund wrote: > > - do we want to make declarations at arbitrary points errors? It's > > already a warning currently. > > While there are legitimate criticisms, it's a standard feature in C, > C++, and many other languages, so I don't see what we'd gain by fighting it. I personally don't really care - for C there's really not much of a difference (contrast to C++ with RAII). But Robert and Tom both said this would be an issue with moving to C99 for them. I don't want to hold up making progress here by fighting over this issue. Greetings, Andres Freund
On 8/22/18 10:56 AM, Peter Eisentraut wrote: > On 22/08/2018 14:02, Andres Freund wrote: >> If we agree on that, I'm going to propose a patch that includes: >> - relevant cleanups to configure >> - adapts sources.sgml to refer to C99 instead of C89 >> - add some trivial conversions to for(int i;;) and struct initializers, >> so the relevant old animals fail >> - adds a configure check to enable errors with vla usage (-Werror=vla) > > sounds good Sounds good to me. > >> - do we want to make declarations at arbitrary points errors? It's >> already a warning currently. > > While there are legitimate criticisms, it's a standard feature in C, > C++, and many other languages, so I don't see what we'd gain by fighting it. +1. -- -David david@pgmasters.net
Hi, On 2018-08-22 05:02:11 -0700, Andres Freund wrote: > If we agree on that, I'm going to propose a patch that includes: > - relevant cleanups to configure > - adapts sources.sgml to refer to C99 instead of C89 > - add some trivial conversions to for(int i;;) and struct initializers, > so the relevant old animals fail > - adds a configure check to enable errors with vla usage (-Werror=vla) > > Questions: > > - do we want to make declarations at arbitrary points errors? It's > already a warning currently. > - other new restrictions that we want to introduce at the same time? Attached is a version doing so. Turns out ripping < msvc 2010 support allows us to get rid of a fair bit of code. Using appveyor I tested that I didn't bungle things too badly. But I don't really know what I'm doing in that area, Andrew or Craig, your input would be appreciated. I tried to update sources.sgml to reflect my understanding of the subset of C99 we're going to use. I'd rather start more restrictive and then argue about relaxing things separately. There's a few further potential cleanups due to relying on c99: - Use __func__ unconditionally, rather than having configure test for it - Use inline unconditionally, rather than having configure test for it - Remove tests for AC_TYPE_INTPTR_T, AC_TYPE_UINTPTR_T, AC_TYPE_LONG_LONG_INT, we can rely on them being present. - probably more in that vein I'd rather do these separately lateron, in case one of them causes trouble on some animals. There's some potential ones I think we should *not* do: - remove AC_C_FLEXIBLE_ARRAY_MEMBER, and rely on it unconditionally. I'm disinclined to do this, because C++ IIRC doesn't support it in any version, and I don't want to make Peter's life unnecessarily hard. - remove AC_C_RESTRICT check, rely on it unconditionally: MSVC still spells this differently. Greetings, Andres Freund
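(For reference, the construct AC_C_FLEXIBLE_ARRAY_MEMBER probes for -- illustrative code with an invented type name, not PostgreSQL source:)

    /* C99 flexible array member: the last member is declared without a length.
     * Standard C++ has no equivalent, which is the reason given above for
     * keeping the configure test instead of assuming the feature. */
    typedef struct blob
    {
        int     len;
        char    data[];         /* flexible array member */
    } blob;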
Andres Freund <andres@anarazel.de> writes: > There's a few further potential cleanups due to relying on c99: > - Use __func__ unconditionally, rather than having configure test for it > - Use inline unconditionally, rather than having configure test for it > - Remove tests for AC_TYPE_INTPTR_T, AC_TYPE_UINTPTR_T, > AC_TYPE_LONG_LONG_INT, we can rely on them being present. > - probably more in that vein I wouldn't be in too much of a hurry to do that, particularly not the third item. You are confusing "compiler is c99" with "system headers are c99". Moreover, I don't see that we're buying much with such changes. regards, tom lane
Hi, On 2018-08-22 20:16:24 -0400, Tom Lane wrote: > Andres Freund <andres@anarazel.de> writes: > > There's a few further potential cleanups due to relying on c99: > > - Use __func__ unconditionally, rather than having configure test for it > > - Use inline unconditionally, rather than having configure test for it > > - Remove tests for AC_TYPE_INTPTR_T, AC_TYPE_UINTPTR_T, > > AC_TYPE_LONG_LONG_INT, we can rely on them being present. > > - probably more in that vein > > I wouldn't be in too much of a hurry to do that, particularly not the > third item. You are confusing "compiler is c99" with "system headers > are c99". Moreover, I don't see that we're buying much with such > changes. Yea, I am not in much of a hurry on any of them. I think the only argument for them is that it'd buy us a little bit of a reduction in configure runtime... Greetings, Andres Freund
On 2018-08-22 17:09:05 -0700, Andres Freund wrote: > Attached is a version doing so. Mildly updated version attached. This adds an explanatory commit message, removes one stray docs C89 reference, and fixes a mis-squashing of a patch. Greetings, Andres Freund
On 23/08/2018 03:31, Andres Freund wrote: > On 2018-08-22 17:09:05 -0700, Andres Freund wrote: >> Attached is a version doing so. > > Mildly updated version attached. This adds an explanatory commit > message, removes one stray docs C89 reference, and fixes a mis-squashing > of a patch. Let's do it! -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2018-08-23 22:02:26 +0200, Peter Eisentraut wrote: > On 23/08/2018 03:31, Andres Freund wrote: > > On 2018-08-22 17:09:05 -0700, Andres Freund wrote: > >> Attached is a version doing so. > > > > Mildly updated version attached. This adds an explanatory commit > > message, removes one stray docs C89 reference, and fixes a mis-squashing > > of a patch. > > Let's do it! Pushed the first two. I'll send the presumably affected buildfarm owners an email, asking them whether they want to update. Greetings, Andres Freund
On Fri, Aug 24, 2018 at 1:44 PM, Andres Freund <andres@anarazel.de> wrote: > On 2018-08-23 22:02:26 +0200, Peter Eisentraut wrote: >> On 23/08/2018 03:31, Andres Freund wrote: >> > On 2018-08-22 17:09:05 -0700, Andres Freund wrote: >> >> Attached is a version doing so. >> > >> > Mildly updated version attached. This adds an explanatory commit >> > message, removes one stray docs C89 reference, and fixes a mis-squashing >> > of a patch. >> >> Let's do it! > > Pushed the first two. I'll send the presumably affected buildfarm > owners an email, asking them whether they want to update. Note that designated initialisers are not in C++ yet (though IIUC they have been accepted in P0329R4[1] for the draft that will become C++20 whatever it turns out to be). [1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4691.html -- Thomas Munro http://www.enterprisedb.com
On Thu, Aug 16, 2018 at 12:24 PM, Thomas Munro <thomas.munro@enterprisedb.com> wrote: > FWIW cfbot is using Visual Studio 2010 right now. Appveyor provides > 2008, 2010, 2012, 2013 (AKA 12.0), 2015, 2017, and to test with a > different toolchain you can take the example patch from > https://wiki.postgresql.org/wiki/Continuous_Integration and add a line > like this to the end of the install section (worked for me; for 2015+ > you probably also need to request a different image): > > - 'call "C:\Program Files (x86)\Microsoft Visual Studio > 12.0\VC\vcvarsall.bat" x86_amd64' > > I'll make that change to cfbot if we decree that it is the new > baseline for PostgreSQL on Windows. Done. -- Thomas Munro http://www.enterprisedb.com
Hi, On 2018-08-23 18:44:34 -0700, Andres Freund wrote: > Pushed the first two. Seems to have worked like expected. > I'll send the presumably affected buildfarm owners an email, asking > them whether they want to update. Did that. Andrew, as expected my buildfarm animal mylodon, which uses compiler flags to enforce C89 compliance, failed due to this commit: https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=mylodon&br=HEAD I'd like to change it so it doesn't enforce C89 compliance across the board, but instead enforces the relevant standard. For that I'd need to change CFLAGS per-branch in the buildfarm. Is that possible already? Do I need two different config files? Greetings, Andres Freund
Andres Freund <andres@anarazel.de> writes: > I'd like to change it so it doesn't enforce C89 compliance across the > board, but instead enforces the relevant standard. For that I'd need to > change CFLAGS per-branch in the buildfarm. Is that possible already? Do > I need two different config files? I just did that on dromedary, with a stanza like this at the bottom: if ($branch eq 'HEAD' or $branch ge 'REL_12') { $conf{config_env}->{CC} = 'ccache gcc -std=c99'; } else { $conf{config_env}->{CC} = 'ccache gcc -ansi'; } regards, tom lane
On 2018-08-24 12:10:34 -0400, Tom Lane wrote: > Andres Freund <andres@anarazel.de> writes: > > I'd like to change it so it doesn't enforce C89 compliance across the > > board, but instead enforces the relevant standard. For that I'd need to > > change CFLAGS per-branch in the buildfarm. Is that possible already? Do > > I need two different config files? > > I just did that on dromedary, with a stanza like this at the bottom: > > if ($branch eq 'HEAD' or $branch ge 'REL_12') > { > $conf{config_env}->{CC} = 'ccache gcc -std=c99'; > } > else > { > $conf{config_env}->{CC} = 'ccache gcc -ansi'; > } Thanks, did something similar, mylodon should become green soon. I kinda was hoping that CFLAGS would directly accept version-specific values like some other vars do... Greetings, Andres Freund
On 08/24/2018 11:46 AM, Andres Freund wrote: > Hi, > > On 2018-08-23 18:44:34 -0700, Andres Freund wrote: >> Pushed the first two. > Seems to have worked like expected. > >> I'll send the presumably affected buildfarm owners an email, asking >> them whether they want to update. > Did that. I have installed VS2017 on bowerbird and a test is currently running. It's got past the make phase so I assume everything is kosher. However, we only support VS2017 down to 9.6 and Vs2015 down to 9.5. Perhaps we should consider backpatching support for those down to 9.3. If not, I will just restrict bowerbird to building 9.6 and up. That's fine by me, we'll still have coverage from, say, currawong, but it's a pity we'll only be able to support the older compilers on the old branches. > > > Andrew, as expected my buildfarm animal mylodon, which uses compiler > flags to enforce C89 compliance, failed due to this commit: > https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=mylodon&br=HEAD > > I'd like to change it so it doesn't enforce C89 compliance across the > board, but instead enforces the relevant standard. For that I'd need to > change CFLAGS per-branch in the buildfarm. Is that possible already? Do > I need two different config files? > > I saw Tom's answer, and it will work as far as it goes. But maybe we should look at doing that in configure instead of putting the onus on all buildfarm owners? It already knows if it's using a GNU compiler, not sure how ubiquitous the -ansi and -std=c99 flags are. cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi, On 2018-08-24 14:09:09 -0400, Andrew Dunstan wrote: > I have installed VS2017 on bowerbird and a test is currently running. It's > got past the make phase so I assume everything is kosher. Cool, thanks. > However, we only support VS2017 down to 9.6 and Vs2015 down to 9.5. Perhaps > we should consider backpatching support for those down to 9.3. Hm, I have no strong objections to that. I don't think it's strictly necessary, given 2013 is supported across the board, but for the non MSVC world, we do fix compiler issues in older branches. There's not that much code for the newer versions afaict? Greetings, Andres Freund
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes: > I saw Tom's answer, and it will work as far as it goes. But maybe we > should look at doing that in configure instead of putting the onus on > all buildfarm owners? It already knows if it's using a GNU compiler, not > sure how ubiquitous the -ansi and -std=c99 flags are. No, the only reason either of us are doing that is to force use of a flag that's different from what configure would select by default (which evidently is -std=gnu99 for gcc). Most buildfarm owners have no need to do anything. regards, tom lane
Andres Freund <andres@anarazel.de> writes: > On 2018-08-24 14:09:09 -0400, Andrew Dunstan wrote: >> However, we only support VS2017 down to 9.6 and Vs2015 down to 9.5. Perhaps >> we should consider backpatching support for those down to 9.3. > Hm, I have no strong objections to that. I don't think it's strictly > necessary, given 2013 is supported across the board, but for the non MSVC > world, we do fix compiler issues in older branches. There's not that > much code for the newer versions afaict? +1 for taking a look at how big a patch it would be. But I kind of thought we'd intentionally rejected back-patching some of those changes to begin with, so I'm not sure the end decision will change. regards, tom lane
On 08/24/2018 02:36 PM, Tom Lane wrote: > Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes: >> I saw Tom's answer, and it will work as far as it goes. But maybe we >> should look at doing that in configure instead of putting the onus on >> all buildfarm owners? It already knows if it's using a GNU compiler, not >> sure how ubiquitous the -ansi and -std=c99 flags are. > No, the only reason either of us are doing that is to force use of a > flag that's different from what configure would select by default > (which evidently is -std=gnu99 for gcc). Most buildfarm owners have > no need to do anything. > > Ah. Ok. Then your answer is good. cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 08/24/2018 02:38 PM, Tom Lane wrote: > Andres Freund <andres@anarazel.de> writes: >> On 2018-08-24 14:09:09 -0400, Andrew Dunstan wrote: >>> However, we only support VS2017 down to 9.6 and Vs2015 down to 9.5. Perhaps >>> we should consider backpatching support for those down to 9.3. >> Hm, I have no strong objections to that. I don't think it's strictly >> necessary, given 2013 is supported across the board, but for the non MSVC >> world, we do fix compiler issues in older branches. There's not that >> much code for the newer versions afaict? > +1 for taking a look at how big a patch it would be. But I kind of > thought we'd intentionally rejected back-patching some of those changes > to begin with, so I'm not sure the end decision will change. The VS2017 patch applies cleanly to 9.5, so that seems easy. The VS2015 patch from 9.5 needs a very small amount of adjustment by the look of it for 9.3 and 9.4, after which I hope the VS2017 patch would again apply cleanly. I'll try to put this together. The trouble with not back patching support to all live branches as new versions come in is that it acts as a significant discouragement to buildfarm owners to use the latest Visual Studio versions. I've never argued strongly on this point before, but I think I'm going to be inclined to in future. Meanwhile, I will turn bowerbird back on but just for >= 9.6 for now. cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Andres Freund <andres@anarazel.de> writes: > On 2018-08-16 11:41:30 -0400, Tom Lane wrote: >> Andres Freund <andres@anarazel.de> writes: >>> While I'd personally have no problem kicking gcc 3.4 to the curb, I'm >>> still confused what causes this error mode. Kinda looks like >>> out-of-sync headers with gcc or something. >> Yeah, this is *absolutely* unsurprising for a non-native gcc installation >> on an old platform. > Sure, but that still requires the headers to behave differently between > C89 and C99 mode, as this worked before. But it turns out there's two > different math.h implementation headers, depending on c99 being enabled > (math_c99.h being the troublesome). If I understand correctly the > problem is more that the system library headers are *newer* (and assume > a sun studio emulating/copying quite a bit of gcc) than the gcc that's > being used, and therefore gcc fails. I have some more info on this issue, based on having successfully updated "gaur" using gcc 3.4.6 (which I picked because it was the last of the 3.x release series). It seems very unlikely that there's much difference between 3.4.3 and 3.4.6 as far as external features go. What I find in the 3.4.6 documentation is -- Built-in Function: double __builtin_inf (void) Similar to `__builtin_huge_val', except a warning is generated if the target floating-point format does not support infinities. This function is suitable for implementing the ISO C99 macro `INFINITY'. Note that the function is called "__builtin_inf", whereas what we see protosciurus choking on is "__builtin_infinity". So I don't think this is a version skew issue at all. I think that the system headers are written for the Solaris cc, and its name for the equivalent function is __builtin_infinity, whereas what gcc wants is __builtin_inf. Likewise, the failures we see for __builtin_isinf and __builtin_isnan are because Solaris cc provides those but gcc does not. If we wanted to keep protosciurus going without a compiler update, my thought would be to modify gcc's copy of math_c99.h to correct the function name underlying INFINITY, and change the definitions of isinf() and isnan() back to whatever was being used pre-C99. It's possible that newer gcc releases have been tweaked so that they make appropriate corrections in this header file automatically, but that's not a sure thing. regards, tom lane
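(To make that concrete, the edit to gcc's private copy of math_c99.h might look roughly like the sketch below. This is hypothetical only -- the real Solaris header differs, and finite()/isnand() are assumed here to be the pre-C99 SVID spellings, which would need double-checking on the machine:)

    /* Hypothetical fixup in gcc's "fixed" copy of math_c99.h -- not the real header. */
    #include <ieeefp.h>                            /* assumed home of finite()/isnand() */

    #undef  INFINITY
    #define INFINITY   (__builtin_inf())           /* gcc's builtin; not __builtin_infinity */

    #undef  isinf
    #undef  isnan
    #define isnan(x)   isnand(x)                   /* pre-C99 spelling (assumed) */
    #define isinf(x)   (!finite(x) && !isnand(x))  /* pre-C99 fallback (assumed) */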
On 08/24/2018 03:42 PM, Andrew Dunstan wrote: > > > On 08/24/2018 02:38 PM, Tom Lane wrote: >> Andres Freund <andres@anarazel.de> writes: >>> On 2018-08-24 14:09:09 -0400, Andrew Dunstan wrote: >>>> However, we only support VS2017 down to 9.6 and Vs2015 down to 9.5. >>>> Perhaps >>>> we should consider backpatching support for those down to 9.3. >>> Hm, I have no strong objections to that. I don't think it's strictly >>> necessary, given 2013 is supported across the board, but for the non >>> MSVC >>> world, we do fix compiler issues in older branches. There's not that >>> much code for the newer versions afaict? >> +1 for taking a look at how big a patch it would be. But I kind of >> thought we'd intentionally rejected back-patching some of those changes >> to begin with, so I'm not sure the end decision will change. > > The VS2017 patch applies cleanly to 9.5, so that seems easy. The > VS2015 patch from 9.5 needs a very small amount of adjustment by the > look of it for 9.3 and 9.4, after which I hope the VS2017 patch would > again apply cleanly. > > I'll try to put this together. > > The trouble with not back patching support to all live branches as new > versions come in is that it acts as a significant discouragement to > buildfarm owners to use the latest Visual Studio versions. I've never > argued stringly on this point before, but I think i'm goiung to ber > inclined to in future. > > Meanwhile, I will turn bowerbird back on but just for >= 9.6 for now. > I've pushed support for the latest MSVC compilers back to all live branches. cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi, On 2018-09-11 16:13:15 -0400, Andrew Dunstan wrote: > I've pushed support for the latest MSVC compilers back to all live branches. Thanks! Greetings, Andres Freund
Hi, On 2018-08-25 13:08:18 -0400, Tom Lane wrote: > Andres Freund <andres@anarazel.de> writes: > > On 2018-08-16 11:41:30 -0400, Tom Lane wrote: > >> Andres Freund <andres@anarazel.de> writes: > >>> While I'd personally have no problem kicking gcc 3.4 to the curb, I'm > >>> still confused what causes this error mode. Kinda looks like > >>> out-of-sync headers with gcc or something. > > >> Yeah, this is *absolutely* unsurprising for a non-native gcc installation > >> on an old platform. > > > Sure, but that still requires the headers to behave differently between > > C89 and C99 mode, as this worked before. But it turns out there's two > > different math.h implementation headers, depending on c99 being enabled > > (math_c99.h being the troublesome). If I understand correctly the > > problem is more that the system library headers are *newer* (and assume > > a sun studio emulating/copying quite a bit of gcc) than the gcc that's > > being used, and therefore gcc fails. > > I have some more info on this issue, based on having successfully > updated "gaur" using gcc 3.4.6 (which I picked because it was the last > of the 3.x release series). It seems very unlikely that there's much > difference between 3.4.3 and 3.4.6 as far as external features go. > What I find in the 3.4.6 documentation is > > -- Built-in Function: double __builtin_inf (void) > Similar to `__builtin_huge_val', except a warning is generated if > the target floating-point format does not support infinities. > This function is suitable for implementing the ISO C99 macro > `INFINITY'. > > Note that the function is called "__builtin_inf", whereas what we see > protosciurus choking on is "__builtin_infinity". So I don't think this > is a version skew issue at all. I think that the system headers are > written for the Solaris cc, and its name for the equivalent function is > __builtin_infinity, whereas what gcc wants is __builtin_inf. Likewise, > the failures we see for __builtin_isinf and __builtin_isnan are because > Solaris cc provides those but gcc does not. > > If we wanted to keep protosciurus going without a compiler update, my > thought would be to modify gcc's copy of math_c99.h to correct the > function name underlying INFINITY, and change the definitions of isinf() > and isnan() back to whatever was being used pre-C99. > > It's possible that newer gcc releases have been tweaked so that they > make appropriate corrections in this header file automatically, but > that's not a sure thing. I've pinged Dave about this, and he said: On 2018-09-26 17:04:29 -0400, Dave Page wrote: > Unfortunately, i think that whole machine is basically EOL now. It's > already skipping OpenSSL for some builds, as being stuck on a very old > version of the buildfarm client, as some of the modules used in newer > versions just won't compile or work. I don't have any support contract, or > access to newer versions of SunStudio, and the guys that used to package > GCC for Solaris now charge to download their packages. > > I could potentially build my own version of GCC, but I question whether > it's really worth it, given the other problems. He's now disabled building master on protosciurus and casteroides. We still have damselfly and rover_firefly so I don't feel too bad about that. I've pinged their owners to ask whether they could set up a sun studio (or however that's called in their solaris descendants) version. Greetings, Andres Freund