Thread: cvs version compile error
Hi,

I'm trying to compile pgsql-7.4devl on Solaris 8, but got the error below:

----------------------------------------8<----------------------------------------
numeric.c: In function `PGTYPESnumeric_cmp':
numeric.c:1308: `INT_MAX' undeclared (first use in this function)
numeric.c:1308: (Each undeclared identifier is reported only once
numeric.c:1308: for each function it appears in.)
numeric.c:1310: warning: control reaches end of non-void function
numeric.c: In function `PGTYPESnumeric_to_int':
numeric.c:1452: `INT_MAX' undeclared (first use in this function)
numeric.c: In function `PGTYPESnumeric_to_long':
numeric.c:1474: `LONG_MAX' undeclared (first use in this function)
make[4]: *** [numeric.o] Error 1
make[4]: Leaving directory `/export/home/postdb/pgsql-7.4/pgsql/src/interfaces/ecpg/pgtypeslib'
make[3]: *** [all] Error 2
make[3]: Leaving directory `/export/home/postdb/pgsql-7.4/pgsql/src/interfaces/ecpg'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/export/home/postdb/pgsql-7.4/pgsql/src/interfaces'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/export/home/postdb/pgsql-7.4/pgsql/src'
make: *** [all] Error 2
$ gcc --version
2.95.3
----------------------------------------8<----------------------------------------

The source files were just updated from CVS. ISTM this is a small bug on the
Solaris 8 platform.

Thanks and Regards,
Laser
Weiping He <laser@zhengmai.com.cn> writes:
> I'm trying to compile pgsql-7.4devl on Solaris8, but got the error
> below:

I think Bruce already fixed this.  How old is your CVS pull?

			regards, tom lane
Tom Lane wrote:
> I think Bruce already fixed this.  How old is your CVS pull?

Upgraded this morning, around 2003-07-04 09:29:00 CST, or 2003-07-03
17:29:00 PST. Later I added a

    #include <limits.h>

to src/interfaces/ecpg/pgtypeslib/numeric.c to fix it temporarily. I don't
know if it's the correct fix, but "make check" all passed. Will try a newer
CVS tip later.

Thank you,
laser
Weiping He <laser@zhengmai.com.cn> writes:
> Tom Lane wrote:
>> I think Bruce already fixed this.  How old is your CVS pull?

> upgraded this morning, around 2003-07-04 09:29:00 CST or 2003-07-03
> 17:29:00 PST.
> and later I add a
> #include <limits.h>

Yeah, that is the correct fix, and Bruce did fix it on Wednesday.  I just
found out from Marc that he had to restore cvs.postgresql.org from a
backup, and all CVS commits from Wednesday were lost.  I have chastised
him for not making that crystal-clear to all committers :-(

I believe I can recover the missing updates from my own backup tapes;
working on it now.

			regards, tom lane
We ran into a problem while load-testing a 7.3.2 server. From the database log:

FATAL: cannot open /home/<some_path>/postgresql/PG_VERSION: File table overflow

The QA engineer who ran the test claims that after the server was restarted,
one record in the database was missing. We are not sure what exactly happened.
He was running about 10 servers on HP-UX 11, hitting them with AstraLoad.
Most requests would try to update some record in the database, and most ran
at the Serializable isolation level.

Apparently we managed to run out of open file descriptors on the host
machine. I wonder how Postgres handles this situation (or a power outage,
or any hard system fault, at this point). Is it possible that we really
lost a record because of that? Should we consider changing the default
WAL_SYNC_METHOD?

Thanks in advance,
Michael.
Michael Brusser <michael@synchronicity.com> writes:
> Apparently we managed to run out of the open file descriptors on the host
> machine.

This is pretty common if you set a large max_connections value while not
doing anything to raise the kernel nfile limit.  Postgres will follow
what the kernel tells it is a safe number of open files per process, but
far too many kernels lie through their teeth about what they can support :-(

You can reduce max_files_per_process in postgresql.conf to keep Postgres
from believing what the kernel says.  I'd recommend making sure that
max_connections * max_files_per_process is comfortably less than the
kernel nfiles setting (don't forget the rest of the system wants to have
some files open too ;-))

> I wonder how Postgres handles this situation.
> (Or power outage, or any hard system fault, at this point)

Theoretically we should be able to recover from this without loss of
committed data (assuming you were running with fsync on).  Is your QA
person certain that the record in question had been written by a
successfully-committed transaction?

			regards, tom lane
> > I wonder how Postgres handles this situation.
> > (Or power outage, or any hard system fault, at this point)
>
> Theoretically we should be able to recover from this without loss of
> committed data (assuming you were running with fsync on).  Is your QA
> person certain that the record in question had been written by a
> successfully-committed transaction?

He's saying that his test script did not write any new records, only
updated existing ones. My uneducated guess on how an update may work:

- create a clone of the record to be updated, with some field(s) changed
  to the given values;
- write the new record to the database and delete the original.

If this is the case, could it be that somewhere along these lines Postgres
ran into a problem and lost the record completely? But all this should be
done in a transaction, so... I don't know...

As for fsync, we currently go with whatever the default value is, and the
same for wal_sync_method. Does anyone have an estimate of the performance
penalty related to turning fsync on?

Michael.