Thread: EAV or not to EAV?
Hi all.

I have to track the update history of every single field in a number of
tables. Those updates can apply to the future (next values) but also to
the past (history fixes), and usually affect only some fields, not the
entire record. Every lookup in those tables is always relative to a
timestamp: normally the current timestamp, but possibly a past one.

I've come up with a "traditionally regretted" EAV design with the
addition of timestamps for the history, and the related problems, some
of which have been solved by Joe Conway's crosstab contrib.

Is there a better idea than mine? I hope so.

--
Reg me Please <Non quietis maribus nauta>
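For reference, a minimal sketch of the timestamped EAV layout being described (all table and column names are illustrative, not from the thread; a validity interval per row is one common way to make both past and future fixes queryable, and crosstab can then pivot the attribute rows back into columns):

```sql
-- One row per (entity, attribute, validity interval); names illustrative.
CREATE TABLE attr_history (
    entity_id  integer   NOT NULL,
    attr_name  text      NOT NULL,
    attr_value text      NOT NULL,
    valid_from timestamp NOT NULL,
    valid_to   timestamp NOT NULL DEFAULT 'infinity'
);

-- All attributes of entity 42 as seen at a given (possibly past) moment:
SELECT attr_name, attr_value
FROM attr_history
WHERE entity_id = 42
  AND valid_from <= timestamp '2007-11-01 00:00'
  AND valid_to   >  timestamp '2007-11-01 00:00';
```

A "history fix" then becomes closing or splitting an existing interval rather than overwriting a value.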
On Thu, 22 Nov 2007, Reg Me Please wrote:

> I have to take into account the update history of every single
> field into a number of tables.
> Those updates can come to the future (next values) but also to the
> past (history fix) and apply only to some fields, usually not to the
> entire record.
> Every lookup in those tables is always related to a timestamp that
> normally is the current timestamp, but can also be a past timestamp.
>
> I've come up with a "traditionally regretted" EAV design with the add of
> timestamps for the history. And the related problems, some of which have
> been solved by Joe Conways's crosstab contrib.
>
> Is there a better idea than mine? I hope so.

We use contrib/hstore for this.

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
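A sketch of what the hstore suggestion might look like for this history use case (table and column names are my own illustration, not from the thread; on 8.2-era installs hstore is loaded from contrib, on modern PostgreSQL with CREATE EXTENSION):

```sql
-- Each history row snapshots the fields as one hstore map, so a single
-- table covers every attribute without an EAV row explosion.
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE entity_history (
    entity_id  integer   NOT NULL,
    valid_from timestamp NOT NULL,
    fields     hstore    NOT NULL   -- e.g. 'price=>10.5, qty=>3'
);

-- Value of one field of entity 42 as of a past timestamp:
SELECT fields -> 'price' AS price
FROM entity_history
WHERE entity_id = 42
  AND valid_from <= timestamp '2007-11-01 00:00'
ORDER BY valid_from DESC
LIMIT 1;
```

The trade-off versus plain EAV is that values lose their native types (hstore values are text), but lookups and "state as of time T" queries stay simple.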
Hello all!

I have a small bash script, backup.sh, for creating dumps of my
PostgreSQL db:

#!/bin/bash
time=`date '+%d'-'%m'-'%y'`
cd /home/swkm/services/test
pg_dump mydb > mydb_dump_$time.out

I've edited crontab and added a line:

00 4 * * * swkm /home/swkm/services/test/backup.sh

to execute backup.sh as user swkm daily at 4 am. The user swkm is the
user I use to create backups manually. The script runs fine when
executed manually, but when run by the cron scheduler I get an empty
(0 KB) mydb_dump_$time.out file.

Do you have any idea what's wrong?

Thanks
Sorin
On Nov 22, 2007 2:19 PM, Sorin N. Ciolofan <ciolofan@ics.forth.gr> wrote:

> I've a small bash script backup.sh for creating dumps on my Postgre db:
>
> #!/bin/bash
> time=`date '+%d'-'%m'-'%y'`
> cd /home/swkm/services/test
> pg_dump mydb > mydb_dump_$time.out
>
> The script itself is executed fine if run manually but run on cron
> scheduler I got an mydb_dump_$time.out file empty (of 0 kb)

Hi Sorin,

why don't you add a "MAILTO=<youraddress>" at the start of your crontab
file, so that you can receive a report of the problem?

Regards
Marco

--
Marco Bizzarri
http://iliveinpisa.blogspot.com/
Hi Marco!

Thank you for the advice. I got:

/home/swkm/services/test/backup.sh: line 4: pg_dump: command not found
updating: mydb_dump_22-11-07.out (stored 0%)

which seems strange.

-----Original Message-----
From: Marco Bizzarri
Sent: Thursday, November 22, 2007 3:28 PM
To: Sorin N. Ciolofan
Cc: pgsql-admin@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [ADMIN] backup of postgres scheduled with cron

> why don't you add a "MAILTO=<youraddress>" at the start of your
> crontab file, so that you can receive a report of the problem?
On Thu, 22 Nov 2007, Sorin N. Ciolofan wrote:

> The script itself is executed fine if run manually but run on cron
> scheduler I got an mydb_dump_$time.out file empty (of 0 kb)

You have to pass the -U argument to pg_dump; otherwise it'll try to
run the script as root.

jef
On Nov 22, 2007 2:46 PM, Sorin N. Ciolofan <ciolofan@ics.forth.gr> wrote:

> I got:
>
> /home/swkm/services/test/backup.sh: line 4: pg_dump: command not found
> updating: mydb_dump_22-11-07.out (stored 0%)
>
> which seems strange

Try putting the full path of the pg_dump command in the script.

Regards
Marco

--
Marco Bizzarri
http://iliveinpisa.blogspot.com/
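The "command not found" is consistent with cron's stripped-down environment: cron does not read the user's login profile, so PATH may lack the directory holding pg_dump. A small sketch that reproduces the symptom with a deliberately empty environment (`/nonexistent` is a placeholder PATH, not a real directory):

```shell
# Simulate cron's minimal environment: with a PATH containing no real
# directories, a bare command name like "pg_dump" is not found, even
# though the binary exists somewhere on the system.
msg=$(env -i PATH=/nonexistent /bin/sh -c 'command -v pg_dump || echo "pg_dump: not in PATH"')
echo "$msg"
```

The same script run from an interactive shell succeeds because the login environment has already put the PostgreSQL bin directory on PATH.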
On Thu, Nov 22, 2007 at 02:28:08PM +0100, Marco Bizzarri wrote:

> why don't you add a "MAILTO=<youraddress>" at the start of your
> crontab file, so that you can receive a report of the problem?

Note: check that your cron accepts such an addition. Many systems now
use Vixie's cron, which does accept that, but some don't. It's a nice
feature, and good for this purpose.

A

--
Andrew Sullivan
Old sigs will return after re-constitution of blue smoke
On Nov 22, 2007 2:53 PM, Andrew Sullivan <ajs@crankycanuck.ca> wrote:

> Note: check that your cron accepts such an addition. Many systems now use
> Vixie's cron, which does accept that, but some don't. It's a nice feature,
> and good for this purpose.

Andrew, can you confirm the previous statement? I'm checking on a
Debian Linux, and it seems to be a Vixie cron, and that feature is
described in the man page...

Regards
Marco

--
Marco Bizzarri
http://iliveinpisa.blogspot.com/
Hello Sorin!

Sorin N. Ciolofan wrote:
> #!/bin/bash
> time=`date '+%d'-'%m'-'%y'`
> cd /home/swkm/services/test
> pg_dump mydb > mydb_dump_$time.out

You should send STDERR to an error logfile or set MAILTO in your
crontab. I guess you would then have seen an error message saying that
pg_dump was not found, because cron doesn't load the user's environment
and therefore the PATH variable isn't set. I suggest you call pg_dump
in your script by absolute path.

Greetings,
Frank Wittig
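Putting Frank's and Marco's advice together, a sketch of a cron-safe version of the script (the pg_dump location is an assumption; locate yours with `which pg_dump`; the dump command itself is left commented out so the sketch is harmless to run anywhere):

```shell
#!/bin/bash
# Cron provides almost no environment, so spell out every path explicitly.
PG_DUMP=/usr/bin/pg_dump            # assumed install location, check yours
BACKUP_DIR=/home/swkm/services/test
LOG="$BACKUP_DIR/backup.log"

# "time" is a bash reserved word in pipelines, so use another name,
# and quote the whole date format string in one piece.
stamp=$(date '+%d-%m-%y')
outfile="$BACKUP_DIR/mydb_dump_$stamp.out"

# Capture stderr to a log so failures are visible even without MAILTO:
# "$PG_DUMP" mydb > "$outfile" 2>> "$LOG"
echo "dump target: $outfile"
```

With absolute paths the script no longer depends on whatever PATH cron happens to provide.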
On Thu, Nov 22, 2007 at 02:59:33PM +0100, Marco Bizzarri wrote:

> Andrew, can you confirm the previous statement? I'm checking on a
> Debian Linux, and it seems to be a Vixie cron, and that feature is
> described in the man page...

If the feature's in your man page, then it works on your system :)
I just wanted to warn you that this isn't an original feature of cron,
so you always have to check your system to be sure you have it.

A

--
Andrew Sullivan
Old sigs will return after re-constitution of blue smoke
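For reference, a sketch of how the MAILTO line fits into the system-crontab format used in the thread (the address is a placeholder):

```
MAILTO=swkm@example.org
# m  h  dom mon dow user  command
00   4  *   *   *   swkm  /home/swkm/services/test/backup.sh
```

With MAILTO set, anything the job writes to stdout or stderr is mailed to that address, which would have surfaced the "pg_dump: command not found" error immediately.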
Thank you all,

Yes, I used the absolute path in my script and now it works OK :-)

Thank you again
Sorin

-----Original Message-----
From: Frank Wittig
Sent: Thursday, November 22, 2007 4:01 PM
To: Sorin N. Ciolofan
Cc: pgsql-admin@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [GENERAL] backup of postgres scheduled with cron

> I guess you then would have seen an error message saying that
> pg_dump was not found because cron doesn't load the users environment
> and therefore PATH variable isn't set. I suggest you call pg_dump in
> your script by absolute path.
On Thu, Nov 22, 2007 at 09:14:13AM -0500, Martin Gainty wrote:

> Good Morning Andrew. I noticed the Vixie cron responds to SIGHUP
> signals. Do you have any suggestions or tutorials on how Postgres
> would feed these event signals to Vixie cron?

Why would Postgres have to tell crond to restart?

A

--
Andrew Sullivan
Old sigs will return after re-constitution of blue smoke
On Thu, 2007-11-22 at 09:48 +0100, Reg Me Please wrote:

> I have to take into account the update history of every single
> field into a number of tables.
> Those updates can come to the future (next values) but also to the
> past (history fix) and apply only to some fields, usually not to the
> entire record.
> Every lookup in those tables is always related to a timestamp that
> normally is the current timestamp, but can also be a past timestamp.

Here are two more ideas:

1) Vertically partition the tables so that each field you want to track
is in its own table, and then track the history of those individual
tables. Then join the tables on a key that doesn't change when you need
to see the whole table. This works well if you only have a few non-key
fields per table.

2) Store a normal history, but also store a bitmap of the fields that
change in each record. You might use partial indexes to be able to
quickly select only those records where a certain field has changed.

EAV will give you many problems down the line, and I don't think it
will solve anything for you.

Regards,
Jeff Davis
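A sketch of idea (1), vertical partitioning, with one small history table per tracked field (all names are illustrative, not from the thread):

```sql
-- Idea 1: one tiny history table per tracked field, joined on a key
-- that never changes. Each table only grows when its own field changes.
CREATE TABLE item_price_hist (
    item_id    integer   NOT NULL,
    valid_from timestamp NOT NULL,
    price      numeric   NOT NULL
);

CREATE TABLE item_qty_hist (
    item_id    integer   NOT NULL,
    valid_from timestamp NOT NULL,
    qty        integer   NOT NULL
);

-- Reassemble the "whole record" of item 1 as of a timestamp by taking
-- the latest value of each field at or before that moment:
SELECT p.price, q.qty
FROM (SELECT price FROM item_price_hist
      WHERE item_id = 1 AND valid_from <= timestamp '2007-11-01'
      ORDER BY valid_from DESC LIMIT 1) AS p,
     (SELECT qty FROM item_qty_hist
      WHERE item_id = 1 AND valid_from <= timestamp '2007-11-01'
      ORDER BY valid_from DESC LIMIT 1) AS q;
```

Unlike EAV, each value keeps its native type and its own indexes; the cost is one join (or subquery) per reassembled field.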