Thread: Re: [ADMIN] Speed problem
On Tue, 3 Nov 1998, The Hermit Hacker wrote:

> On Tue, 3 Nov 1998, Roger Books wrote:
>
> > I have a database that I need to pull the schema off of. When
> > I run pg_dump -s, the machine rolls over and dies. To the tune
> > of: I left it running over the weekend, and postgres still had
> > 98% CPU usage on Monday. I managed to do a 'delete from
> > statistics' and it completed overnight, but when I then ran a
> > vacuum to actually remove the garbage, it still hadn't finished
> > after 48 hours. Anyone have any suggestions? (Other than kicking
> > myself for not writing them down as I created them.)
>
> \d tablename

Tried that; I let that one run for 30 hours with no results. I
wouldn't mind waiting a few days for the results, but 95+% CPU usage
is kind of hard on the other (minor) purpose this machine is used
for.

Roger
Hi all

Ah... \d tablename should return before you can get your finger off
the Enter key, as it (I would think) queries a small housekeeping
table that contains the database structure, no? So \d tablename
should be totally independent of the amount of data in that table.

I have never had a table that big, but the times you are talking
about sound totally absurd. I suggest that something is majorly
botched and a full dump / full rebuild / restore may be in order
here.

Of course I am assuming here that the box this runs on is up to the
task: a massive amount of memory, a massive amount of swap space, a
late-model high-speed CPU with lots of cache, on a top-end
motherboard, etc.

Note: with a table that big, your swap space should be on a
physically different hard drive (from the db) so that the db and the
swap are not always fighting for the use of the drive heads.

You probably know all this; I just mention it "just in case", as I
know nothing of how much you do or don't know about anything.

Terry Mackintosh <terry@terrym.com>            http://www.terrym.com
sysadmin/owner  Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!
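A minimal sketch of the sort of catalog lookup \d performs, which
can be run directly if \d itself hangs; 'mydb' and 'mytable' are
placeholders, and the catalogs named are the standard PostgreSQL
system tables:

    # List a table's columns and types straight from the system
    # catalogs; these stay small no matter how many rows the table has.
    psql -d mydb -c "SELECT a.attname, t.typname \
        FROM pg_class c, pg_attribute a, pg_type t \
        WHERE c.relname = 'mytable' \
          AND a.attrelid = c.oid AND a.attnum > 0 \
          AND a.atttypid = t.oid \
        ORDER BY a.attnum"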
Terry Mackintosh wrote:

> Note: with a table that big, your swap space should be on a
> physically different hard drive (from the db) so that the db and
> the swap are not always fighting for the use of the drive heads.

Terry,

I have a rather large database as well (> 2 million tuples). I
thought my system was souped up enough: PII/400 MHz (100 MHz bus),
256 Meg SDRAM, 18 Gig SCSI hard drive, Red Hat Linux 5.1. However,
my swap space (512 Meg) is on the same hard drive as the database
(albeit on a separate partition). It sounds like you are saying that
this is a no-no.

The database runs quite fast except with processes involving
repetitive inserts or updates. With each successive update in a
continuous process, the speed drops (almost like an exponentially
decreasing process). Plus, when this happens, I can't really use the
computer that runs the database because it is soooooo slow. When I
run top, the computer is using all 256 Meg of memory and going about
30-40 Meg into swap space. From what you've suggested, this 30-40
Meg of swap is also competing with the database trying to write to
the hard drive (since they are using the same heads).

If I put in a second drive exclusively for the swap space, could
this increase my speed? Or would it be better to invest in more RAM
so that the job wouldn't need to use any swap space at all?

Thanks.
-Tony
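A quick, Postgres-independent way to check whether the slowdown
coincides with paging, using the stock procps tools; just a sketch:

    # Watch the si/so columns (swap-in/swap-out) every 5 seconds while
    # the update job runs; sustained nonzero values mean the box is
    # paging against the same disk the database lives on.
    vmstat 5

    # And watch whether the backend's process size keeps growing:
    top -d 5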
Hi Tony and all

Well, first I want to say that none of my databases are bigger than
a few meg of data, so I may not be the most experienced here. That
said ...

On Wed, 4 Nov 1998, Tony Reina wrote:

> I have a rather large database as well (> 2 million tuples). I
> thought my system was souped up enough: PII/400 MHz (100 MHz bus),
> 256 Meg SDRAM, 18 Gig SCSI hard drive, Red Hat Linux 5.1. However,
> my swap space (512 Meg) is on the same hard drive as the database
> (albeit on a separate partition). It sounds like you are saying
> that this is a no-no.

Just that under heavy loads it may degrade performance, as you
yourself mention.

> The database runs quite fast except with processes involving
> repetitive inserts or updates. With each successive update in a
> continuous process, the speed drops (almost like an exponentially
> decreasing process). When I run top, the computer is using all 256
> Meg of memory and going about 30-40 Meg into swap space. From what
> you've suggested, this 30-40 Meg of swap is also competing with
> the database trying to write to the hard drive (since they are
> using the same heads).

This is the type of performance degradation I was referring to.

> If I put in a second drive exclusively for the swap space, could
> this increase my speed? Or would it be better to invest in more
> RAM so that the job wouldn't need to use any swap space at all?

Why not both? :-)

Terry Mackintosh <terry@terrym.com>            http://www.terrym.com
sysadmin/owner  Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!
On Wed, 4 Nov 1998, Terry Mackintosh wrote:

> > The database runs quite fast except with processes involving
> > repetitive inserts or updates. With each successive update in a
> > continuous process, the speed drops (almost like an
> > exponentially decreasing process).
>
> This is the type of performance degradation I was referring to.
>
> > If I put in a second drive exclusively for the swap space, could
> > this increase my speed? Or would it be better to invest in more
> > RAM so that the job wouldn't need to use any swap space at all?
>
> Why not both? :-)

Are you running with fsync() disabled?

What version of PostgreSQL are you running? v6.4 has several memory
leak fixes in it, which may or may not help... on long-term
connections, a memory leak *may* be contributing to your problem. If
you run top while doing the updates/inserts, does the process size
just continue to rise?

Something else to try... close and reconnect your insert/update
process(es). Not a long-term solution; I'm just curious whether that
shows an overall speed improvement. Similar to the 'memory leak'
problem, at least this will let go of the process, clean out the
memory, and start over again....

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org    secondary: scrappy@{freebsd|postgresql}.org
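A hedged sketch of the reconnect idea for a script-driven load: feed
the inserts in batches and let psql exit between them, so each new
backend starts with fresh memory. The chunk file names and database
name are hypothetical:

    # Each psql run is a new connection and a new backend, so any
    # per-backend memory growth is discarded between batches.
    for f in chunk.*.sql; do
        psql -d mydb -f "$f"
    done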
> > I have a rather large database as well (> 2 million tuples). ...
> > However, my swap space (512 Meg) is on the same hard drive as
> > the database (albeit on a separate partition).

Hello,

Small question: doesn't Linux only use around 128 Megs tops for swap
space? I thought I read this in a HOWTO somewhere.

Regards

Lincoln

STMicroelectronics

----------------------
PS. Thanks, everyone, for your responses to my previous question
regarding images from data. I went for the gnuplot 3.6 beta, which
supports a gif terminal using the GD library.
On Thu, 5 Nov 1998, Lincoln Spiteri wrote:

> Small question: doesn't Linux only use around 128 Megs tops for
> swap space? I thought I read this in a HOWTO somewhere.

If it does, I'll have to add that to my list of reasons why Linux is
a bad operating system :)

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org    secondary: scrappy@{freebsd|postgresql}.org
On Thu, 5 Nov 1998, The Hermit Hacker wrote:

> On Thu, 5 Nov 1998, Lincoln Spiteri wrote:
>
> > Small question: doesn't Linux only use around 128 Megs tops for
> > swap space? I thought I read this in a HOWTO somewhere.

You can use several swap partitions with different priorities.

Oleg

> If it does, I'll have to add that to my list of reasons why Linux
> is a bad operating system :)

_____________________________________________________________
Oleg Bartunov, sci. researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
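For the archives, a minimal sketch of what Oleg describes, on a
2.0-era kernel where each swap area tops out around 128 MB; the
device names are made up. Giving the areas equal priority makes the
kernel round-robin between them:

    # /etc/fstab: two swap partitions at the same priority
    /dev/sdb2   none   swap   sw,pri=1   0 0
    /dev/sdc2   none   swap   sw,pri=1   0 0

    # or enable one by hand at a given priority:
    swapon -p 1 /dev/sdb2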
Hi,

> You can use several swap partitions with different priorities.

You can use 8 swap partitions => 1 GB of swap.

You can also use RAID0 (striping) to speed up your database: imagine
you have 5 HDs and your file is a little here, a little there,
permitting you to access the data in parallel. You can even use
RAID0 for swap (the resulting partition must be <= 128 MB). The
Linux kernel supports RAID in software, regardless of whether the
drives are IDE or SCSI, and regardless of the size of each HD.

[]'s
Mateus Cordeiro Inssa
---------------------
Innova Producoes Digitais
---------------------
Linux User: 76186
ICQ (Licq): 15243895
---------------------
mateus@innova.com.br
mateus@einstein.innova.com.br

Thu Nov  5 12:00:06 EDT 1998
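A sketch of the software-RAID0 setup Mateus means, in the raidtools
style of the day; the device names are hypothetical, and the
database directory would then live on a filesystem created on
/dev/md0:

    # /etc/raidtab: stripe two partitions into one md device
    raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        chunk-size            32
        persistent-superblock 1
        device                /dev/sdb1
        raid-disk             0
        device                /dev/sdc1
        raid-disk             1

    # then build and format it:
    mkraid /dev/md0
    mke2fs /dev/md0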
Marc,

I'm running PostgreSQL v6.3.2. No, I don't want to disable fsync,
just in case someone stops a process midstream (which has happened
before; I figure it could screw up the db). So maybe it is those
"memory leaks" that you've suggested. I'll try closing and
re-connecting to see if there are any differences. However, I would
be interested in downloading v6.4. Do you know where to get it? (I
think it is a beta version, but I couldn't find it on the www site.)

Thanks.
-Tony Reina

p.s. Does 6.4 have page-level locking?

The Hermit Hacker wrote:
>
> Are you running with fsync() disabled?
>
> What version of PostgreSQL are you running? v6.4 has several
> memory leak fixes in it, which may or may not help... on long-term
> connections, a memory leak *may* be contributing to your problem.
> If you run top while doing the updates/inserts, does the process
> size just continue to rise?
On Thu, 5 Nov 1998, G. Anthony Reina wrote:

> I'm running PostgreSQL v6.3.2. No, I don't want to disable fsync,
> just in case someone stops a process midstream (which has happened
> before; I figure it could screw up the db). ... However, I would
> be interested in downloading v6.4. Do you know where to get it?

First off, fsync() only helps if the system itself crashes...
someone else might want to correct me, but if a process stops
midstream, it should not affect the db in any way, shape or form...

Second, v6.4 was released yesterday... it should be available on all
the FTP mirrors by now, as well as the main site :)

> p.s. Does 6.4 have page-level locking?

No, Vadim was waiting for v6.4 to be released before diving into it
for v6.5 ...

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org    secondary: scrappy@{freebsd|postgresql}.org
> First off, fsync() only helps if the system itself crashes...
> someone else might want to correct me, but if a process stops
> midstream, it should not affect the db in any way, shape or
> form...

Yes, only OS crashes. In any other crash, fsync has no effect,
because the buffers get written to the disk anyway. Vadim should
have that fixed by 6.5, so we wouldn't need fsync anymore.

--
  Bruce Momjian                     |  http://www.op.net/~candle
  maillist@candle.pha.pa.us         |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026
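For completeness, the switch under discussion: in the 6.x
postmaster, -o passes options through to each backend, and -F turns
fsync off. A sketch, with a placeholder data directory:

    # Start the postmaster with fsync disabled: faster bulk loads, but
    # an OS crash or power failure can leave the database corrupted.
    postmaster -D /usr/local/pgsql/data -o -F &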