Thread: Vacuum-full very slow
I'm in the process of archiving data on one of my PG machines. After backing up the data, I delete the old records and then run a "vacuum full" on each table.

I'm vacuuming the first table now and it is taking much longer than I expected (I'm now past the 2-hour mark). Some info:

Version: 8.1.2
On-disk table size: ~1.9GB
Records deleted from the table: 10,290,892 (~60% of records)
Physical memory: 2GB
Connection maintenance_work_mem: 1GB
Table indexes: 7
CPU: Intel(R) Pentium(R) 4 CPU 3.00GHz
Disk: 2x200GB SATA as RAID-1 using 3ware card

The vacuum full is the only significant load on the server at the moment (PG or otherwise). IO is probably the bottleneck, as the CPU is running near 50% idle and 40% wait-state, with PG using in the 5-15% range.

So...

What amount of time might I expect to wait for this process to complete? Should I be changing settings other than maintenance_work_mem? What is the effect of the indexes, and would dropping and recreating them help? Other suggestions?

Cheers,
Steve
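For reference, a quick way to see how much of that footprint is heap versus indexes (a minimal sketch with a hypothetical table name; the size functions below are built in as of 8.1):

  SELECT pg_size_pretty(pg_relation_size('my_big_table'))       AS heap_only,
         pg_size_pretty(pg_total_relation_size('my_big_table')) AS heap_plus_indexes;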
Steve Crawford wrote:
> I'm in the process of archiving data on one of my PG machines. After
> backing up the data, I delete the old records and then run a "vacuum
> full" on each table.
>
> I'm vacuuming the first table now and it is taking much longer than I
> expected (I'm now past the 2-hour mark). Some info:
>
> Version: 8.1.2
> On-disk table size: ~1.9GB
> Records deleted from the table: 10,290,892 (~60% of records)
> Physical memory: 2GB
> Connection maintenance_work_mem: 1GB
> Table indexes: 7
> CPU: Intel(R) Pentium(R) 4 CPU 3.00GHz
> Disk: 2x200GB SATA as RAID-1 using 3ware card
>
> The vacuum full is the only significant load on the server at the moment
> (PG or otherwise). IO is probably the bottleneck as CPU is running near
> 50% idle and 40% wait-state with PG using in the 5-15% range.

You could try CLUSTER instead of VACUUM FULL, as I think it should be
faster. And the indexes will be devoid of any bloat, which will be a
nice side effect.

I wonder, though, if you set maintenance_work_mem too high and are
causing the OS to swap?

--
Alvaro Herrera                  http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
> You could try CLUSTER instead of VACUUM FULL, as I think it should be
> faster. And the indexes will be devoid of any bloat, which will be a
> nice side effect.
>
> I wonder, though, if you set maintenance_work_mem too high and are
> causing the OS to swap?

Hmmm, why would cluster be faster?

No swapping - "top" shows swap mem of 3MB used and that wasn't changing.
Just to be sure I ran "swapoff -a ; swapon -a", which brought it back to
zero, and it's not budging from there.

Cheers,
Steve
Alvaro Herrera <alvherre@commandprompt.com> writes:
> I wonder, though, if you set maintenance_work_mem too high and are
> causing the OS to swap?

AFAIR, vacuum full pays no attention to maintenance_work_mem anyway.
If the data it needs doesn't fit in memory, you lose ...

			regards, tom lane
Hi:

I'm a new user of PostgreSQL (8.2.3), and I'm very happy with both the performance and operation of the system. My compliments to the many authors who keep this database running and useful.

My question is:

I want to "freeze" a snapshot of the database every year (think of end of year tax records). However, I want this frozen version (and all the previous frozen versions) available to the database user as read-only. My thinking is to copy the entire public schema (which is where all the current data lives) into a new schema, named 2007 (2008, etc.)

Is this a valid plan? I had thought of using a different database, but that would require multiple opens. I looked to see if there were an easy way to script doing an exact schema copy, but I haven't found anything like it in the docs.

This is not heavy usage, nor is there a large amount of data (current pg_dump backups are around 30 megabytes).

Am I on the right track, or would you suggest a different strategy?

-Owen

Clipboard, Inc.
On Wed, Apr 25, 2007 at 09:36:35AM -0700, Steve Crawford wrote:
> Hmmm, why would cluster be faster?

Basically, vacuum full moves tuples from the end to the beginning of a
table so it can compact the table. In the process it needs to update
all the indexes too. So you save heap space, but it tends to fragment
your index. Lots of disk writes also.

OTOH, cluster simply scans the table, sorts it, writes it out, then
rebuilds the indexes. If you've removed a lot of tuples, empirically
it's faster.

VACUUM FULL is discouraged these days, simply because it isn't actually
as efficient as you might expect. Better to make sure the table doesn't
grow big in the first place, and use CLUSTER to rebuild it if you really
need to.

Hope this helps,
--
Martijn van Oosterhout <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
Owen Hartnett wrote:
> I want to "freeze" a snapshot of the database every year (think of end
> of year tax records). However, I want this frozen version (and all the
> previous frozen versions) available to the database user as read-only.
> My thinking is to copy the entire public schema (which is where all the
> current data lives) into a new schema, named 2007 (2008, etc.)

Sounds perfectly reasonable. You could either do it as a series of:
  CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
or do a pg_dump of schema "public", tweak the file to change the schema
names, and restore it.

--
  Richard Huxton
  Archonet Ltd
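A minimal sketch of the first approach, with hypothetical table names (note that CREATE TABLE ... AS copies data only, not indexes, constraints, or foreign keys):

  CREATE SCHEMA archive2007;
  -- repeat for each table in public
  CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
  CREATE TABLE archive2007.bar AS SELECT * FROM public.bar;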
Martijn van Oosterhout wrote:
> On Wed, Apr 25, 2007 at 09:36:35AM -0700, Steve Crawford wrote:
>> Hmmm, why would cluster be faster?
>
> Basically, vacuum full moves tuples from the end to the beginning of a
> table so it can compact the table. In the process it needs to update
> all the indexes too. So you save heap space but it tends to fragment
> your index. Lots of disk writes also.
>
> OTOH, cluster simply scans the table, sorts it, writes it out then
> rebuilds the indexes. If you've removed a lot of tuples, empirically
> it's faster.
>
> VACUUM FULL is discouraged these days, simply because it isn't actually
> as efficient as you might expect. Better to make sure it doesn't grow
> big in the first place, and use CLUSTER to rebuild the table if you
> really need to.

So my mental model is utterly and completely wrong. My assumption was
that since a full vacuum requires an access exclusive lock, it would do
the intelligent and efficient thing, which would be to first compact the
table and then recreate the indexes.

Am I reading that what it actually does is to thrash around keeping
indexes unnecessarily updated, bloating them in the process?

Will cluster reduce the on-disk size like vacuum does?

(And am I the only one who thinks the cluster command is backwards -
after all, it is the table that is being reordered based on an index, so:
  CLUSTER tablename ON indexname
seems way more intuitive than
  CLUSTER indexname ON tablename
)

Cheers,
Steve
Steve Crawford wrote:
> So my mental model is utterly and completely wrong. My assumption was
> that since a full vacuum requires an access exclusive lock, it would do
> the intelligent and efficient thing, which would be to first compact the
> table and then recreate the indexes.

Right, it doesn't do the intelligent and efficient thing. There are
differences though: VACUUM FULL does not need an extra copy of the table
and indexes, while CLUSTER does. OTOH, VACUUM FULL also needs to WAL-log
every action, which makes it slower; CLUSTER only calls fsync when it's
done, but since it keeps the original files around it doesn't need to
involve WAL.

> Am I reading that what it actually does is to thrash around keeping
> indexes unnecessarily updated, bloating them in the process?

Yes.

> Will cluster reduce the on-disk size like vacuum does?

Yes. And a bit more, because indexes don't suffer.

> And am I the only one who thinks the cluster command is backwards -
> after all, it is the table that is being reordered based on an index, so:

No, you're not, which is why a new syntax has been introduced for 8.3.

--
Alvaro Herrera                  http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
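For illustration, with hypothetical table and index names, the two forms look like this (the second is the table-first syntax added in 8.3):

  -- 8.1/8.2 syntax: index name first
  CLUSTER foo_pkey ON foo;

  -- 8.3 syntax: table name first
  CLUSTER foo USING foo_pkey;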
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Steve Crawford wrote:
>> Am I reading that what it actually does is to thrash around keeping
>> indexes unnecessarily updated, bloating them in the process?

> Yes.

Just for the record, it's not "unnecessary". The point of that is to
not leave a corrupted table behind if VACUUM FULL fails midway through.
The algorithm is:

1. Copy tuples to lower blocks, inserting index entries for them too.
   During this stage, if we fail then the copied tuples are invalid
   (since they were inserted by a failed transaction) and so no
   corruption. Meanwhile the original tuples are marked as "moved by
   this vacuum transaction", but their validity is not affected by that.

2. Mark the transaction committed. This atomically causes all the copied
   tuples to be GOOD and all the originals to be INVALID according to
   the tuple validity rules.

3. Remove the index entries for moved-off tuples. If we crash here, some
   of the invalid tuples will have index entries and some won't, but
   that doesn't matter because they're invalid. (The next vacuum will
   take care of finishing the cleanup.)

4. Remove the moved-off tuples (which just requires truncating the
   table).

I don't see a way to remove the old index entries before inserting new
ones without creating a window where the index and table will be
inconsistent if vacuum fails.

CLUSTER avoids all this thrashing by recopying the whole table, but of
course that has peak space requirements approximately twice the table
size (and is probably not a win anyway unless most of the table rows
need to be moved). You pays your money, you takes your choice.

			regards, tom lane
> I don't see a way to remove the old index entries before inserting new
> ones without creating a window where the index and table will be
> inconsistent if vacuum fails.

VACUUM FULL is slow because it plays with the indexes...
CLUSTER is slow because it has to order the rows...

Maybe drop all the indexes, VACUUM FULL only the table, then recreate
all the indexes? If vacuum fails, the index drop would be rolled back.

By the way, about indexes:

When you have a small table (say, for a website, maybe a few tens of
megabytes max...), reindexing it takes just a few seconds, maybe 10-20
seconds. It could be interesting, performance-wise, to tell postgres not
to bother about crash-survivability of indexes on this table, like
temporary tables: write nothing to WAL, and if it crashes, on recovery
postgres would reindex the table. Btree indexing is so fast on postgres
that I'd definitely use this feature. I'd rather trade a minute of
recovery versus less disk IO for index updates.

You could even do that for whole tables (like a web sessions table)
which hold "perishable" data...

> CLUSTER avoids all this thrashing by recopying the whole table, but
> of course that has peak space requirements approximately twice the
> table size (and is probably not a win anyway unless most of the table
> rows need to be moved). You pays your money, you takes your choice.
>
> 	regards, tom lane
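A minimal sketch of that sequence, with hypothetical table and index names (note that VACUUM cannot run inside a transaction block, so the three steps run as separate statements rather than as one atomic unit):

  -- shed the index maintenance work before compacting
  DROP INDEX foo_col_idx;
  -- compact just the heap
  VACUUM FULL foo;
  -- rebuild the index from scratch afterwards
  CREATE INDEX foo_col_idx ON foo (col);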
On Wed, 2007-04-25 at 12:47 -0400, Owen Hartnett wrote:
> I want to "freeze" a snapshot of the database every year (think of
> end of year tax records). However, I want this frozen version (and
> all the previous frozen versions) available to the database user as
> read-only.

First, I'd rename the current-year schema to a more meaningful name
(eg, taxes2006). Each year you could do a schema-only dump of the
current year, tweak the schema name in the dump to reflect the new
year, and restore just the schema into the same database. The benefit
of this approach is that the data stay in place (ie, you don't dump
public and restore into a new schema).

Conceptually, something as simple as the following pipe might suffice
to dump, rename, and restore into a new schema:

$ pg_dump -s -n taxes2006 | sed -e 's/taxes2006/taxes2007/g' | psql -qa

(This is a little dangerous because I've assumed that the string
'taxes2006' occurs only as a schema name. I've also assumed Unix/Linux
and I have no idea what you'd do on a windows box.)

PostgreSQL doesn't have a read-only mode per se. The closest you can
get is to write a script to revoke insert/update/delete on all tables
in an archived schema; that's also pretty easy:

$ psql -Atc "select 'REVOKE INSERT,UPDATE,DELETE ON '|| nspname||'.'|| relname||' FROM someuser;'
  from pg_class C join pg_namespace N on C.relnamespace=N.oid and N.nspname='taxes2006'
  WHERE C.relkind='r'" \
  | psql -qa

(In general, you should reduce everything to a one-liner.)

One of the advantages of having archived schemas in a single database
is that you'll be able to write queries that involve multiple years.
You wouldn't be able to do that (easily*) if you archived the full
database.

-Reece

* This is where David Fetter will mention dblink.

--
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0
Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Steve Crawford wrote:
>>> Am I reading that what it actually does is to thrash around keeping
>>> indexes unnecessarily updated, bloating them in the process?
>
>> Yes.
>
> Just for the record, it's not "unnecessary". The point of that is to
> not leave a corrupted table behind if VACUUM FULL fails midway through.
> The algorithm is: ...

Yes, dig far enough under the covers and it all makes sense.

Just curious: would it be wise, or even possible, to create the
functionality of an (oxymoronic) "vacuum full partial"? In other words,
provide the ability to set a max-tuples or max-time parameter. Since you
are looking for active tuples at the physical end of the file, moving
them to unused space within the file, and then lopping off the end, it
might be possible to do this in chunks to give control over how long a
table is locked at any one time. Of course this doesn't improve the
index-bloat issue.

> CLUSTER avoids all this thrashing by recopying the whole table, but
> of course that has peak space requirements approximately twice the
> table size (and is probably not a win anyway unless most of the table
> rows need to be moved). You pays your money, you takes your choice.

That's certainly our case, as we are archiving and purging tens of
millions of tuples from the prior year and have plenty of reserve disk
space. I killed the vacuum full after it passed the 5-hour mark; cluster
took 20 minutes, nice compact indexes included.

Thanks for the advice, everyone.

Cheers,
Steve
On Thu, 2007-04-26 at 00:13 +0200, Listmail wrote:
> By the way, about indexes:
>
> When you have a small table (say, for a website, maybe a few tens of
> megabytes max...), reindexing it takes just a few seconds, maybe 10-20
> seconds. It could be interesting, performance-wise, to tell postgres
> not to bother about crash-survivability of indexes on this table, like
> temporary tables: write nothing to WAL, and if it crashes, on recovery
> postgres would reindex the table.
>
> Btree indexing is so fast on postgres that I'd definitely use this
> feature. I'd rather trade a minute of recovery versus less disk IO for
> index updates.
>
> You could even do that for whole tables (like a web sessions table)
> which hold "perishable" data...

That optimisation is on mine/Heikki's todo list for the next release.
In some cases it can speed up recovery, as well as mainline performance.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com
Owen Hartnett wrote:
>
> Hi:
>
> I'm a new user of PostgreSQL (8.2.3), and I'm very happy with both the
> performance and operation of the system. My compliments to the many
> authors who keep this database running and useful.
>
> My question is:
>
> I want to "freeze" a snapshot of the database every year (think of end
> of year tax records). However, I want this frozen version (and all the
> previous frozen versions) available to the database user as read-only.
> My thinking is to copy the entire public schema (which is where all the
> current data lives) into a new schema, named 2007 (2008, etc.)
>
> Is this a valid plan? I had thought of using a different database, but
> that would require multiple opens. I looked to see if there were an
> easy way to script doing an exact schema copy, but I haven't found
> anything like it in the docs.
>
> This is not heavy usage, nor is there a large amount of data (current
> pg_dump backups are around 30 megabytes).
>
> Am I on the right track, or would you suggest a different strategy?

I get the impression that table partitioning with constraint exclusion
would fit your purpose nicely. Effectively the data is split into
separate tables, each with a check constraint on a specific year, while
the total dataset is still available through the common inherited base
table.

If possible (I haven't used this myself yet), the big benefit is that
your data ends up in the right table even if you are a little late
starting with your next year's data.

Regards,

--
Alban Hertroys
alban@magproductions.nl

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //
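A minimal sketch of what that could look like on 8.2, with hypothetical table and column names (table inheritance plus CHECK constraints; constraint_exclusion must be enabled for the planner to skip non-matching children):

  CREATE TABLE taxes (
      record_id  serial,
      tax_year   integer NOT NULL,
      amount     numeric
  );

  -- one child table per year, each constrained to that year
  CREATE TABLE taxes_2007 (CHECK (tax_year = 2007)) INHERITS (taxes);
  CREATE TABLE taxes_2008 (CHECK (tax_year = 2008)) INHERITS (taxes);

  SET constraint_exclusion = on;
  -- queries against the parent see all years; a WHERE clause on the
  -- constrained column lets the planner scan only the matching child
  SELECT sum(amount) FROM taxes WHERE tax_year = 2007;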
Jonathan Vanasco wrote:
>
> On Apr 25, 2007, at 2:05 PM, Richard Huxton wrote:
>
>> Owen Hartnett wrote:
>>> I want to "freeze" a snapshot of the database every year (think of
>>> end of year tax records). However, I want this frozen version (and
>>> all the previous frozen versions) available to the database user as
>>> read-only. My thinking is to copy the entire public schema (which is
>>> where all the current data lives) into a new schema, named 2007
>>> (2008, etc.)
>>
>> Sounds perfectly reasonable. You could either do it as a series of:
>>   CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
>> or do a pg_dump of schema "public", tweak the file to change the
>> schema names and restore it.
>
> the create table method won't copy the constraints + fkeys.

Shouldn't matter for an archive though, since you'd not want anyone to
have permissions. Still, pg_dump is my preference. Apart from anything
else, you can keep a copy of the dump around too.

--
  Richard Huxton
  Archonet Ltd
At 9:23 AM +0100 4/26/07, Richard Huxton wrote:
> Jonathan Vanasco wrote:
>>
>> On Apr 25, 2007, at 2:05 PM, Richard Huxton wrote:
>>
>>> Owen Hartnett wrote:
>>>> I want to "freeze" a snapshot of the database every year (think
>>>> of end of year tax records). However, I want this frozen version
>>>> (and all the previous frozen versions) available to the database
>>>> user as read-only. My thinking is to copy the entire public
>>>> schema (which is where all the current data lives) into a new
>>>> schema, named 2007 (2008, etc.)
>>>
>>> Sounds perfectly reasonable. You could either do it as a series of:
>>>   CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
>>> or do a pg_dump of schema "public", tweak the file to change the
>>> schema names and restore it.
>>
>> the create table method won't copy the constraints + fkeys.
>
> Shouldn't matter for an archive though, since you'd not want anyone
> to have permissions. Still, pg_dump is my preference. Apart from
> anything else, you can keep a copy of the dump around too.

Thanks to everyone for all the replies. You've been most helpful.

It looks like pg_dump is the way to go, though I'll have to think about
it, because I'm ultimately looking for a mechanical process that will
automatically tweak the schema names. I don't want to have to visit
clients every year to archive their data. Since the pg_dump file format
might change, my program may have to be version dependent.

-Owen
On Apr 25, 2007, at 2:05 PM, Richard Huxton wrote:
> Owen Hartnett wrote:
>> I want to "freeze" a snapshot of the database every year (think of
>> end of year tax records). However, I want this frozen version (and
>> all the previous frozen versions) available to the database user as
>> read-only. My thinking is to copy the entire public schema (which is
>> where all the current data lives) into a new schema, named 2007
>> (2008, etc.)
>
> Sounds perfectly reasonable. You could either do it as a series of:
>   CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
> or do a pg_dump of schema "public", tweak the file to change the
> schema names and restore it.

The create table method won't copy the constraints + fkeys.

I think you're best off with a pg_dump.
On Thu, Apr 26, 2007 at 12:13:13AM +0200, Listmail wrote:
> 	VACUUM FULL is slow because it plays with the indexes...
> 	CLUSTER is slow because it has to order the rows...

And:

VACUUM FULL has to seek/read/write all over the disk to get its job
done. CLUSTER can scan through the table linearly a few times and write
out the result.

Now, it's true that sorting large files involves overflowing to disk,
but that path has been pretty well optimised. As the ratio between read
time and seek time gets worse, I expect CLUSTER to keep pulling ahead.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
Martijn van Oosterhout wrote:
> On Thu, Apr 26, 2007 at 12:13:13AM +0200, Listmail wrote:
>> 	VACUUM FULL is slow because it plays with the indexes...
>> 	CLUSTER is slow because it has to order the rows...
>
> And:
> VACUUM FULL has to seek/read/write all over the disk to get its job
> done. CLUSTER can scan through the table linearly a few times and
> write out the result.
> Now, it's true that sorting large files involves overflowing to disk,
> but that path has been pretty well optimised.

Hmm, no, CLUSTER doesn't scan the table linearly; it uses an indexscan,
so it also needs seek/read/seek/read/write.

--
Alvaro Herrera                  http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
What about creating the empty schemas now, through 2038?

Your application would then move automatically to the new (empty) schema each new year, without any changes to the db structure.
--
Angelo Rossi
Bluemetrix Ltd
Northpoint House
Northpoint Business Park
Mallow Road
Cork
Ireland
Ph: +353 021 4640107
Fax: +353 21 4309131
Web: www.bluemetrix.com
On 4/26/07, Owen Hartnett <owen@clipboardinc.com> wrote:
> [...]
>
> Thanks to everyone for all the replies. You've been most helpful.
>
> It looks like pg_dump is the way to go, though I'll have to think about
> it, because I'm ultimately looking for a mechanical process that will
> automatically tweak the schema names. I don't want to have to visit
> clients every year to archive their data. Since the pg_dump file format
> might change, my program may have to be version dependent.
>
> -Owen