Thread: Re: [GENERAL] ld.so failed
Hello,

> Did you add your pg libraries path into /etc/ld.so.conf
> (/usr/local/postgres/libs is mine) and run ldconfig?

I searched the whole tree from / for ld.so.conf but did not find one.

C.
--
Carsten Huettl - <http://www.ahorn-Net.de>
pgp-key on request
On Fri, Oct 15, 1999 at 06:21:11AM +0100, Carsten Huettl wrote:
> Hello,
>
> > Did you add your pg libraries path into /etc/ld.so.conf
> > (/usr/local/postgres/libs is mine) and run ldconfig?
>
> I searched the whole tree from / for ld.so.conf but did not find one.

Add the path in /etc/rc.conf to ldconfig_paths. Then reboot.

--
Regards,

Sascha Schumann
Consultant
> > Did you add your pg libraries path into /etc/ld.so.conf
> > (/usr/local/postgres/libs is mine) and run ldconfig?
>
> I searched the whole tree from / for ld.so.conf but did not find one.

FreeBSD doesn't really use one. You can manually add the path with
"ldconfig -R /path". You can make the change permanent by adding the path
to the ldconfig parameter in /etc/rc.conf. Alternately, you can include
the ldconfig -R command in your /usr/local/etc/rc.d/pgsql.sh script, if
you have one.

--
[ Jim Mercer     Reptilian Research     jim@reptiles.org  +1 416 410-5633 ]
[ The telephone, for those of you who have forgotten, was a commonly used ]
[ communications technology in the days before electronic mail.          ]
[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]
From:       jim@reptiles.org (Jim Mercer)
Subject:    Re: [GENERAL] ld.so failed
To:         CHUETTL@ahorn.sgh.uunet.de (Carsten Huettl)
Date sent:  Fri, 15 Oct 1999 09:59:09 -0400 (EDT)
Copies to:  hitesh@presys.com, pgsql-general@postgresql.org

> you can make the change permanent by adding the path to the ldconfig
> parameter int /etc/rc.conf.

How do I make this permanent if I need the "-aout" option with ldconfig?

C.
--
Carsten Huettl - <http://www.ahorn-Net.de>
pgp-key on request
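Putting the thread's suggestions together, a sketch of the FreeBSD side. The exact rc.conf variable names (`ldconfig_paths`, `ldconfig_paths_aout`) are assumptions based on FreeBSD 3.x-era defaults; check /etc/defaults/rc.conf on your release before trusting them:

```sh
# One-off, takes effect immediately (ELF libraries):
ldconfig -R /usr/local/postgres/lib

# One-off for a.out libraries, as asked above:
ldconfig -aout -R /usr/local/postgres/lib

# Permanent: add the path in /etc/rc.conf, then reboot
# (or rerun the ldconfig startup step by hand):
ldconfig_paths="/usr/local/lib /usr/local/postgres/lib"
ldconfig_paths_aout="/usr/local/lib/aout /usr/local/postgres/lib"
```

The two `ldconfig_paths*` lines are rc.conf configuration, not commands; they only take effect at boot.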
Hi everyone,

Should inserts be so slow?

I've written a perl script to insert 10 million records for testing
purposes and it looks like it's going to take a LONG time with postgres.
MySQL is about 150 times faster! I don't have any indexes on either. I am
using the DBI and relevant DBD for both.

For Postgres 6.5.2 it's slow with either of the following table structures:

create table central ( counter serial, number varchar(12), name text,
  address text );
create table central ( counter serial, number varchar(12), name varchar(80),
  address varchar(80) );

For MySQL I used:

create table central ( counter int not null auto_increment primary key,
  number varchar(12), name varchar(80), address varchar(80) );

The relevant perl portion is (same for both):

  $SQL=<<"EOT";
insert into central (number,name,address) values (?,?,?)
EOT
  $cursor=$dbh->prepare($SQL);

  while ($c<10000000) {
    $number=$c;
    $name="John Doe the number ".$c;
    $address="$c, Jalan SS$c/$c, Petaling Jaya";
    $rv=$cursor->execute($number,$name,$address)
      or die("Error executing insert!",$DBI::errstr);
    if ($rv==0) {
      die("Error inserting a record with database!",$DBI::errstr);
    };
    $c++;
    $d++;
    if ($d>1000) {
      print "$c\n";
      $d=1;
    }
  }
Try turning off AutoCommit: MySQL doesn't support transactions, so that
might be what's causing the speed boost. Just change the connect line from:

  $pg_con=DBI->connect("DBI:Pg:....

to

  $pg_con=DBI->connect("DBI:Pg(AutoCommit=>0):....

and add

  $pg_con->commit

before you disconnect. I may have the syntax wrong, so double check the
docs for the DBI and Pg modules (perldoc DBD::Pg and perldoc DBI).

At 01:25 AM 10/20/99, Lincoln Yeoh wrote:
>Hi everyone,
>
>Should inserts be so slow?
>
>[rest of original message snipped]
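The advice above - one transaction per batch of rows instead of one per row - looks like this as a minimal Python sketch. sqlite3 stands in for DBD::Pg purely so the example is self-contained (a real run would need a live Postgres server); the table and data mirror Lincoln's script, scaled down:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None   # turn off autocommit; we issue BEGIN/COMMIT ourselves
cur = conn.cursor()
cur.execute("create table central (counter integer primary key,"
            " number varchar(12), name text, address text)")

BATCH = 1000
cur.execute("BEGIN")
for c in range(10000):        # 10 million in the original test
    cur.execute("insert into central (number, name, address) values (?, ?, ?)",
                (str(c), "John Doe the number %d" % c,
                 "%d, Jalan SS%d/%d, Petaling Jaya" % (c, c, c)))
    if (c + 1) % BATCH == 0:  # commit every BATCH rows, not every row
        cur.execute("COMMIT")
        cur.execute("BEGIN")
cur.execute("COMMIT")         # flush the final (possibly partial) batch
```

The win comes from amortizing the per-commit fsync over BATCH rows; the same structure applies verbatim to the Perl/DBI loop with `$dbh->commit` in place of the COMMIT statements.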
Thanks. It's now a lot faster. Now only about 5 or so times slower. Cool.

But it wasn't unexpected that I got the following after a while ;).

NOTICE:  BufferAlloc: cannot write block 990 for joblist/central
NOTICE:  BufferAlloc: cannot write block 991 for joblist/central
DBD::Pg::st execute failed: NOTICE:  BufferAlloc: cannot write block 991
for joblist/central
Error executing insert!NOTICE:  BufferAlloc: cannot write block 991 for
joblist/central
Database handle destroyed without explicit disconnect.

I don't mind that. I was actually waiting to see what would happen, and my
jaw would have dropped if MVCC could handle Multi Versions with 10,000,000
records!

But the trouble is postgres seemed to behave strangely after that error.
The select count(*) from central took so long that I gave up. I tried drop
table central, and so far it hasn't dropped yet. Single record selects
still work tho.

Well, next time I'll commit after a few thousand inserts. But still, things
shouldn't lock up like that, right? It had only inserted a few more thousand
records, to the 50000 to 60000 records stage, so it's not a big table I'm
dealing with.

I cancelled the drop, killed postmaster (nicely), restarted it and tried
vacuuming. Vacuuming found some errors, but now it has got stuck too:

NOTICE:  Index central_counter_key: pointer to EmptyPage (blk 988 off 52) - fixing
NOTICE:  Index central_counter_key: pointer to EmptyPage (blk 988 off 53) - fixing

Then nothing for the past 5 minutes. Looks like I may have to manually
clean things up with good ol' rm. <sigh>. Not an urgent problem, since this
shouldn't happen in production.

By the way, the 999,999th record has been inserted into MySQL already. It's
pretty good at the rather limited stuff it does. But Postgres' MVCC thing
sounds real cool. Not as cool as a 10MegaRecord MVCC would be tho <grin>.

Must try screwing up Oracle one of these days. I'm pretty good at messing
things up ;).

Cheerio,
Link.
Lincoln Yeoh wrote:
>
> It's now a lot faster. Now only about 5 or so times slower. Cool.
>
> But it wasn't unexpected that I got the following after a while ;).
>
> NOTICE:  BufferAlloc: cannot write block 990 for joblist/central
> NOTICE:  BufferAlloc: cannot write block 991 for joblist/central
> DBD::Pg::st execute failed: NOTICE:  BufferAlloc: cannot write block 991
> for joblist/central
> Error executing insert!NOTICE:  BufferAlloc: cannot write block 991 for
> joblist/central
> Database handle destroyed without explicit disconnect.
>
> I don't mind that. I was actually waiting to see what would happen and
> my jaw would have dropped if MVCC could handle Multi Versions with
> 10,000,000 records!

It doesn't seem to be an MVCC problem. MVCC uses transaction ids, not
tuple ones, and so should work with any number of rows modified by a
concurrent transaction... In theory... -:))

Vadim
At 04:12 PM 20-10-1999 +0800, Vadim Mikheev wrote:
>It doesn't seem as MVCC problem. MVCC uses transaction ids,
>not tuple ones, and so should work with any number of rows
>modified by concurrent transaction... In theory... -:))

OK. Dunno what I hit then. I wasn't modifying rows, I was inserting rows.

How many rows (blocks) can I insert before I have to do a commit?

Well, anyway, the Postgres inserts aren't so much slower if I only commit
once in a while. Only about 3 times slower for the first 100,000 records.
So the subject line is now inaccurate :). Not bad, I like it.

But to fix the resulting problem I had to manually rm the files related to
the table. I also dropped the database to make sure ;). That's not good.

Cheerio,
Link.
Lincoln Yeoh wrote:
>
> At 04:12 PM 20-10-1999 +0800, Vadim Mikheev wrote:
> >It doesn't seem as MVCC problem. MVCC uses transaction ids,
> >not tuple ones, and so should work with any number of rows
> >modified by concurrent transaction... In theory... -:))
>
> OK. Dunno what I hit then. I wasn't modifying rows, I was inserting rows.

You hit buffer manager/disk manager problems, or ate all disk space.
As for "modifying" - I meant insertion, deletion, update...

> How many rows (blocks) can I insert before I have to do a commit?

Each transaction can have up to 2^32 commands.

> Well anyway the Postgres inserts aren't so much slower if I only commit
> once in a while. Only about 3 times slower for the first 100,000 records.
> So the subject line is now inaccurate :). Not bad, I like it.

Hope that it will be much faster when WAL is implemented...

Vadim
> NOTICE: BufferAlloc: cannot write block 990 for joblist/central

Whenever I saw this error it was caused by a full filesystem in the
data/base/ directory.

--Gene
At 04:38 PM 20-10-1999 +0800, Vadim Mikheev wrote:
>You hit buffer manager/disk manager problems or eat all disk space.
>As for "modifying" - I meant insertion, deletion, update...

There was enough disk space (almost another gig more). So it's probably
some buffer manager problem. Is that the postgres buffer manager, or is it
a Linux one?

Are you able to duplicate that problem? All I did was to turn off
autocommit and start inserting.

>> How many rows (blocks) can I insert before I have to do a commit?
>Each transaction can have up to 2^32 commands.

Wow, that's cool. Should be enough for everyone. I can't imagine anybody
making 4 billion statements without committing anything, not even
politicians!

>> Well anyway the Postgres inserts aren't so much slower if I only commit
>> once in a while. Only about 3 times slower for the first 100,000 records.
>> So the subject line is now inaccurate :). Not bad, I like it.
>
>Hope that it will be much faster when WAL will be implemented...

What's WAL? Is postgres going to be faster than MySQL? That would be
pretty impressive - transactions and all. Woohoo!

Hope it doesn't stand for Whoops, All's Lost :).

Cheerio,
Link.
I've seen WAL mentioned several times, but have yet to see anything about
what it is. Help! :)

What is WAL? Or is it something that is only known by the Illuminati? :)

I did a search in the archives and came up empty, no hits. Not even the
messages which only mention it. Nothing, nada, zip, no "gee I'm banging my
head against the wall trying..." or anything else.

After having read some of the messages in the archives today, I have a
confession. I am very much pro Linux, *BSDs, et al, but I do most of my
web browsing at work on a Win95 machine using Netscape. I know it's scary,
but I am not trying to be. Please forgive me. If at all possible I will
try to atone by installing RH 6.x on my machine at work, if I can do it
where my boss can boot (from a shutdown machine) into Windows without
knowing Linux exists. :)

Thanks,
Jimmie Houchin

Lincoln Yeoh wrote:
>
> At 04:38 PM 20-10-1999 +0800, Vadim Mikheev wrote:
> >Hope that it will be much faster when WAL will be implemented...
>
> What's WAL? Is postgres going to be faster than MySQL? That would be pretty
> impressive- transactions and all. Woohoo!
>
> Hope it doesn't stand for Whoops, All's Lost :).
>
> Cheerio,
>
> Link.
Lincoln Yeoh wrote:
>
> At 04:38 PM 20-10-1999 +0800, Vadim Mikheev wrote:
> >You hit buffer manager/disk manager problems or eat all disk space.
> >As for "modifying" - I meant insertion, deletion, update...
>
> There was enough disk space (almost another gig more). So it's probably
> some buffer manager problem. Is that the postgres buffer manager or is it a
> Linux one?
>
> Are you able to duplicate that problem? All I did was to turn off
> autocommit and start inserting.

I created a table with a text column and inserted 1000000 rows of
'!' x rand(256) without problems on a Sun Ultra, 6.5.2. I ran postmaster
with only the -S flag. And while inserting, I ran select count(*) from
_table_ in another session from time to time - wonder what was returned
all the time before commit? -:))

> >> Well anyway the Postgres inserts aren't so much slower if I only commit
> >> once in a while. Only about 3 times slower for the first 100,000 records.
> >> So the subject line is now inaccurate :). Not bad, I like it.
> >
> >Hope that it will be much faster when WAL will be implemented...
>
> What's WAL? Is postgres going to be faster than MySQL? That would be pretty
                     ^^^^^^^^^^^^^^^^^^^^^^^
No.

> impressive- transactions and all. Woohoo!

WAL is Write Ahead Log, transaction logging. This will reduce the # of
fsyncs (among other things) Postgres has to perform now.

The test above took near 38 min without the -F flag and 24 min with -F
(no fsync at all). With WAL, the same test without -F will be near as
fast as with -F now.

But what makes me unhappy right now is that with -F, COPY FROM takes
JUST 3 min !!! (And 16 min without -F.)
Isn't parsing/planning overhead toooo big?!

Vadim
Jimmie Houchin wrote:

> What is WAL? Or is it something that is only known by the Illuminati? :)

I understand your fears. I can also not follow all that the linux cracks
around me are talking about.

I was also still using a Windows workstation for quite some time, when we
had already started our almost-linux-only startup business writing Perl
apps for PG.

And I still have a dual boot laptop allowing me to use Win98 and IE5.0
when I cannot get internet banking to work under linux, or I want to
watch a DVD movie, or do some other multimedia stuff that I don't
understand what it does and only want to enjoy.

I'm not one of the guys who enjoy configuring linux for two days to get
some device working. No, it rather scares me. I enjoy having working
solutions, though. (I also never looked closely at the motors inside any
of the cars I owned.)

But what I learned is that you still have to do some learning. You do it
involuntarily as a Windows user. Linux gives you the freedom to do it by
your own free choice - having great support from a lot of people who
really know what they are doing.

So - get down from envying the "Illuminati" - build up a working linux
configuration - step by step - slowly. And ... if you are one of the less
bright guys like me - don't ask for too much at one time.

E.g., I still don't use an Office Suite under Linux. So I made a (very
basic) installation of Samba and use an old laptop with Win95 and my
Office97 software on the Linux shares. No sweat. And no apologies
necessary.

There's nothing to be ashamed of in being a Windows user. Being a Linux
user can sometimes make you a little proud, though. That's a difference.

So let's just think that WAL means: use "Windows And Linux".

Good Luck!

Chris
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Christian Rudow                  E-Mail: Christian.Rudow@thinx.ch
ThinX networked business services    Stahlrain 10, CH-5200 Brugg
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> WAL is Write Ahead Log, transaction logging.
> This will reduce # of fsyncs (among other things) Postgres has
> to perform now.
> Test above took near 38 min without -F flag and 24 min
> with -F (no fsync at all).
> With WAL the same test without -F will be near as fast as with
> -F now.
>
> But what makes me unhappy right now is that with -F COPY FROM takes
> JUST 3 min !!! (And 16 min without -F)
> Isn't parsing/planning overhead toooo big ?!

Yikes. I always thought it would be nice to try to cache query plans by
comparing parse trees, and if they match cached versions, replace any
constants with new ones and use the cached query plan. Hard to do right,
though.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
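Bruce's idea can be sketched in a few lines. This toy Python version is purely illustrative - a real implementation would compare parse trees inside the backend, not regex-normalized strings - but it shows the shape: statements that differ only in their constants share one cached "plan":

```python
import re

plans = {}        # normalized SQL -> cached "plan"
plans_built = 0   # counts how often we had to do the (expensive) parse/plan step

def normalize(sql):
    # Replace quoted strings and bare numbers with '?' so statements that
    # differ only in their constants map to the same cache key.
    sql = re.sub(r"'[^']*'", "?", sql)
    return re.sub(r"\b\d+\b", "?", sql)

def get_plan(sql):
    global plans_built
    key = normalize(sql)
    if key not in plans:
        plans_built += 1
        plans[key] = ("plan for", key)   # stand-in for real parser/planner output
    return plans[key]

get_plan("insert into central values (1, 'John Doe the number 1')")
get_plan("insert into central values (2, 'John Doe the number 2')")
```

After both calls only one plan has been built; the second insert reuses the first one's cache entry. The hard part Bruce alludes to is doing this soundly (constants can change the plan, e.g. when they hit different indexes), which is why comparing parse trees rather than strings matters.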
Bruce Momjian <maillist@candle.pha.pa.us> writes:
>> But what makes me unhappy right now is that with -F COPY FROM takes
>> JUST 3 min !!! (And 16 min without -F)
>> Isn't parsing/planning overhead toooo big ?!

> Yikes. I always thought it would be nice to try and cache query plans
> by comparing parse trees, and if they match cached versions, replace any
> constants with new ones and use cached query plan. Hard to do right,
> though.

But INSERT ... VALUES(...) has such a trivial plan that it's hardly likely
to be worth caching. We probably ought to do some profiling to see where
the time is going, and see if we can't speed things up for this simple
case.

In the meantime, the conventional wisdom is still that you should use
COPY, if possible, for bulk data loading. (If you need default values
inserted in some columns then this won't do...)

                        regards, tom lane
Hello,

Thanks for the reply. Actually I don't fear Linux, I embrace it. :)

I am quite proficient with Windows but have never considered myself a
Windows user. I don't own a Windows machine and currently have no
intentions of it. I do however use it at work because my boss wanted a
machine which could play DOOM. :(

I own 4 Macs at home, on one of which I have LinuxPPC installed. I will
be purchasing an Athlon PC this winter for my own use. I will install
RH 6.x on it.

I have never really put too much thought into installing Linux on my PC
at work. There are only 2 employees here and one computer. My boss is not
computer proficient. He plays games, primarily solitaire. I run the
computer and the business uses of it. Whatever I do Linux-wise, I wanted
it to be reasonably transparent to his usage, which is minimal. I'll
install Mandrake Linux and put a games folder in the "Start" menu and put
solitaire inside of it. He might not ever even know I'm no longer in
Windows. :)

However, when I'm not in the office and the computer is shut down, I
wanted him to be able to start up the computer and boot straight into
Windows. Typing "win" at a prompt might lose him. :)

I don't envy the Illuminati, only their understanding of what WAL is. :)
I really wasn't apologizing for being a Windows user, only for using
Windows. :) It is a totally involuntary action.

I am ready for the day when there is enough educational and edutainment
software available that I can use Linux for my children's computers. We
use computers extensively in their education. But for now it'll be the
MacOS. Someday, if I get permission from my wife, I'll install Linux and
either SheepShaver or Mac-on-Linux.

Thanks for the opportunity for fun off topic banter. Now back to the
show. I'll keep an eye on the Illuminati and maybe one of them will slip
and reveal the meaning of the Great WAL of PostgreSQL. :)

Later,
Jimmie Houchin

Christian Rudow wrote:
>
> Jimmie Houchin wrote:
>
> > What is WAL?
> Thanks for the opportunity for fun off topic banter. Now back to the
> show. I'll keep an eye on the Illuminati and maybe one of them will slip
> and reveal the meaning of the Great WAL of PostgreSQL. :)

I pulled this from dejanews, as the postgresql list archive search is
down. I have no idea who said it......

  WAL is Write Ahead Log, transaction logging. This will reduce # of
  fsyncs (among other things) Postgres has to perform now.

->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<
James Thompson    138 Cardwell Hall  Manhattan, Ks   66506    785-532-0561
Kansas State University                  Department of Mathematics
->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<
Hello,

Thanks. Yes, the Illuminati has spoken. :)

The quote is from Vadim in the "Re: [GENERAL] Postgres INSERTs much slower
than MySQL?" thread. After I sent my message and continued reading the new
messages in my box, I read the post to which you refer. Now the intrigue
is over. I are educated. :)

I was not aware that DejaNews had the postgresql mailing lists. I'll have
to look into this.

Thanks.
Jimmie Houchin

James Thompson wrote:
>
> I pulled this from dejanews as the postgresql list archieve search is
> down. I have no idea who said it......
>
> WAL is Write Ahead Log, transaction logging. This will reduce # of fsyncs
> (among other things) Postgres has to perform now.
On Fri, 22 Oct 1999, Christian Rudow wrote:

> So - get down from envying the "Illuminati" - build up a working
> linux configuration - step by step - slowly. And ... if you are one of
> the less brighter guy's like me - don't ask for too much at one
> time.

Actually, 3 out of 4 Illuminati use *BSD ...

Marc G. Fournier                ICQ#7615664            IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org     secondary: scrappy@{freebsd|postgresql}.org
That's why you don't hear anything about them anymore - they are stuck in
the past... :)

At 09:20 PM 10/22/99, The Hermit Hacker wrote:
>Actually, 3 out of 4 Illuminati use *BSD ...
What's BSD?? How's it compare to Linux?

Newbie, you know.

At 09:20 PM 10/22/1999 -0300, The Hermit Hacker wrote:
>Actually, 3 out of 4 Illuminati use *BSD ...
> Whats BSD ?? Hows itcompare to linux ?
>
> Newbie you know

This conversation develops at a remarkable speed; it's just the direction
that appears to be wrong. This site should be able to absorb it:

http://www.unix-wizards.com/

Also, anyone having questions as to which of the systems is better, please
kindly refer to the following FAQ, esp. items 0.2a, 0.2b and 0.2c:

http://www.public.iastate.edu/~gendalia/FAQ/FAQ_00

--Gene
> WAL is Write Ahead Log, transaction logging.
> This will reduce # of fsyncs (among other things) Postgres has
> to perform now.
> Test above took near 38 min without -F flag and 24 min
> with -F (no fsync at all).
> With WAL the same test without -F will be near as fast as with
> -F now.

This sounds impressive. So I did some testing with my pgbench to see how
WAL improves the performance without -F, using current.

100000 record insertion + vacuum took 1:02 with -F (4:10 without -F)

TPC-B like transactions (mix of insert/update/select) per second:
  21 (with -F)
   3 (without -F)

I couldn't see any improvement against 6.5.2 so far. Maybe some part of
WAL is not yet committed to current?
---
Tatsuo Ishii
Tatsuo Ishii wrote:
>
> > With WAL the same test without -F will be near as fast as with
> > -F now.
>
> This sounds impressive. So I did some testings with my pgbench to see
> how WAL improves the performance without -F using current.
>
> 100000 records insertation + vacuum took 1:02 with -F (4:10 without -F)
>
> TPC-B like transactions(mix of insert/update/select) per second:
> 21 (with -F)
> 3 (without -F)
>
> I couldn't see any improvement against 6.5.2 so far. Maybe some part
> of WAL is not yet committed to current?
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                 ...is not implemented.

Vadim
At 17:08 +0200 on 22/10/1999, Tom Lane wrote:

> In the meantime, the conventional wisdom is still that you should use
> COPY, if possible, for bulk data loading. (If you need default values
> inserted in some columns then this won't do...)

Yes it would - in two steps. COPY to a temp table that only has the
non-default columns. Then INSERT ... SELECT ... from that temp table into
your "real" table.

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
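Herouth's two-step load might look like the following SQL sketch. The table layout is borrowed from Lincoln's test; the `loaded` default column and the file path are made up for illustration:

```sql
-- "Real" table, with a column the data file doesn't supply;
-- its DEFAULT should fire on the final INSERT.
CREATE TABLE central (
    counter  serial,
    loaded   timestamp DEFAULT now(),   -- hypothetical default column
    number   varchar(12),
    name     text,
    address  text
);

-- Staging table holding only the columns present in the file.
CREATE TABLE central_load (
    number   varchar(12),
    name     text,
    address  text
);

-- Bulk load (tab-delimited by default); this is the fast path,
-- bypassing per-row parse/plan overhead.
COPY central_load FROM '/tmp/central.dat';

-- Move the rows across, letting counter and loaded take their defaults.
INSERT INTO central (number, name, address)
SELECT number, name, address FROM central_load;

DROP TABLE central_load;
```

The INSERT ... SELECT runs as a single command, so it still pays planning cost only once, unlike the row-at-a-time loop that started this thread.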