Thread: Invoice number
Hi

My, my, it works! How many times have I tried this mailing list with no success :-(

I'm wondering how people create guaranteed sequential numbers - in my case for invoice numbers.

- Sequences are not rollback'able.
- It seems overkill to have a table just for this.
- What else?

tia
Hi,

> I'm wondering how people create guaranteed sequential numbers - in my case
> for invoice numbers.

See the PostgreSQL book p. 85 and 250, and the online docs about serials:

CREATE TABLE person (
    id   SERIAL,
    name TEXT
);

Volker Paul
On Thu, Dec 21, 2000 at 11:10:00AM +0100, Kaare Rasmussen wrote:
> - Sequences are not rollback'able.

Did you mean SERIAL instead of sequence here?

If so, why is not being rollback'able an issue? All you should need is unique numbers, not necessarily exactly sequential numbers.

mrc
--
Mike Castle               Life is like a clock: You can work constantly
dalgoda@ix.netcom.com     and be right all the time, or not work at all
www.netcom.com/~dalgoda/  and be right at least twice a day.  -- mrc
We are all of us living in the shadow of Manhattan.  -- Watchmen
Kaare,

> How many times have I tried this
> mailing list with no
> success :-(

Hey! It's a peer-to-peer mailing list. You want guaranteed answers, pay for support (I do) <shrug>

> I'm wondering how people create guaranteed sequential
> numbers - in my case
> for invoice numbers.
>
> - Sequences are not rollback'able.
> - It seems overkill to have a table just for this.
> - What else?

Given a full set of business rules for invoice numbers, I could probably throw you a solution. Gods know that I've devised enough invoicing systems in the past. Please post more detail.

-Josh Berkus
Mike Castle writes:
> If so, why is not being rollback'able an issue? All you should need is
> unique numbers. Not necessarily exactly sequential numbers.

Sometimes business rules require that every member of a sequence of such things as invoice numbers be accounted for.

Speculation: would it be possible to log SERIALs as they are issued? It might be sufficient to just record the user id, though it would be more useful to log some indication of what the number was used for (or at least whether or not it was used at all).
--
John Hasler
john@dhh.gt.org (John Hasler)
Dancing Horse Hill
Elmwood, WI
Mike Castle wrote:
>On Thu, Dec 21, 2000 at 11:10:00AM +0100, Kaare Rasmussen wrote:
>> - Sequences are not rollback'able.
>
>Did you mean SERIAL instead of sequence here?
>
>If so, why is no rollbackable an issue? All you should need is unique
>numbers. Not necessarily exactly sequential numbers.

For invoice numbers, it matters.

Numbers missing from such a sequence are likely to provoke questions from auditors and taxmen; why borrow trouble?
--
Oliver Elphick                                Oliver.Elphick@lfix.co.uk
Isle of Wight                              http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
========================================
"For a child will be born to us, a son will be given to us;
 And the government will rest on His shoulders; And His name will be
 called Wonderful Counsellor, Mighty God, Eternal Father, Prince of
 Peace."                                                    Isaiah 9:6
On Thu, Dec 21, 2000 at 05:50:43PM +0000, Oliver Elphick wrote:
> For invoice numbers, it matters.
>
> Numbers missing from such a sequence are likely to provoke questions from
> auditors and taxmen; why borrow trouble?

What do you do in the following scenario:

Client 1 is placing an order, gets invoice #1.
Client 2 is simultaneously placing an order, and gets invoice #2.

Client 1 changes mind and cancels order. Invoice #1 is not used. Invoice #2 is.

Client 3 comes along. Do they use invoice #1, out of order, or invoice #3?

I suppose if the assignment of the invoice number is set up in such a way that it is the very last action performed just before the commit, you should be able to ensure that the situation of having to deal with a rollback never occurs (outside of a system crash between assignment of the invoice number and the commit, and that could be explained to auditors).

[What happens in a paper world if a cup of coffee is spilt on some invoices, and these precious items are thrown in the trash?]

mrc
--
Mike Castle               Life is like a clock: You can work constantly
dalgoda@ix.netcom.com     and be right all the time, or not work at all
www.netcom.com/~dalgoda/  and be right at least twice a day.  -- mrc
We are all of us living in the shadow of Manhattan.  -- Watchmen
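[Editor's note: Mike's "last action before the commit" idea can be sketched in SQL. This is an illustration, not code from the thread; the `invoice` table, its columns, and the `invoice_seq` sequence are all hypothetical names.]

```sql
-- Build the invoice first WITHOUT a number; cancelling at this point
-- consumes nothing from the sequence.
BEGIN;

INSERT INTO invoice (customer_id, total, invoice_no)
VALUES (42, 199.95, NULL);

-- Only once the order is final, stamp the number as the very last
-- action before COMMIT, minimizing the window in which a rollback
-- (or crash) could leave a gap.
UPDATE invoice
   SET invoice_no = nextval('invoice_seq')
 WHERE customer_id = 42 AND invoice_no IS NULL;

COMMIT;
```

Note the residual risk Mike mentions: if the transaction aborts between the `nextval` call and the `COMMIT`, the sequence value is still consumed, because `nextval` is never rolled back.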
> What do you do in the following scenario:

I don't enter!

> Client 1 is placing an order, gets invoice #1.

Wrong! He gets an order number.

> Client 1 changes mind and cancels order. Invoice #1 is not used. Invoice
> #2 is.
>
> Client 3 comes along. Do they use invoice #1, out of order, or invoice #3?

If invoice #1 is cancelled for some reason, it is still an invoice. The right thing is to produce a credit note to balance off the cancelled invoice. For smaller quantities, I believe it is OK just to file the cancelled invoice with a clear note that it's been cancelled. But I have a feeling my accountant doesn't agree :-)
--
Kaare Rasmussen        --Linux, spil,--           Tlf:   3816 2582
Kaki Data              tshirts, merchandise       Fax:   3816 2501
Howitzvej 75           Open 14.00-18.00           Email: kar@webline.dk
2000 Frederiksberg     Saturday 11.00-17.00       Web:   www.suse.dk
Mike Castle writes:
> Client 3 comes along. Do they use invoice #1, out of order, or invoice
> #3?

It shouldn't matter, as long as every number is accounted for. Seems to me that a trigger could make a log entry every time the serial is incremented. Workable?

> What happens in a paper world if a cup of coffee is spilt on some
> invoices, and these precious items are thrown in the trash?

They are returned to accounting with an explanatory note, the numbers are logged as "voided", and the spoiled forms are shredded.
--
John Hasler
john@dhh.gt.org
Dancing Horse Hill
Elmwood, Wisconsin
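[Editor's note: John's trigger idea might look like the sketch below, written in present-day PL/pgSQL syntax. The `invoice` table, the audit table, and the function name are all illustrative, not from the thread.]

```sql
-- Hypothetical audit table: one row per invoice number handed out.
CREATE TABLE invoice_no_log (
    invoice_no  integer,
    issued_by   text      DEFAULT current_user,
    issued_at   timestamp DEFAULT now(),
    note        text                -- e.g. 'issued', 'voided'
);

-- Trigger procedure: record the number whenever an invoice row is created.
CREATE FUNCTION log_invoice_no() RETURNS trigger AS $$
BEGIN
    INSERT INTO invoice_no_log (invoice_no, note)
    VALUES (NEW.invoice_no, 'issued');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER invoice_no_audit
    AFTER INSERT ON invoice
    FOR EACH ROW EXECUTE PROCEDURE log_invoice_no();
```

One caveat: the log row is written inside the same transaction, so it rolls back together with the invoice. The trigger audits numbers that were actually committed; a number consumed by `nextval` in a rolled-back transaction still leaves no trace, which is exactly the gap problem under discussion.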
Mike Castle wrote:
> What do you do in the following scenario:
>
> Client 1 is placing an order, gets invoice #1.
> Client 2 is simultaneously placing an order, and gets invoice #2.
>
> Client 1 changes mind and cancels order. Invoice #1 is not used. Invoice
> #2 is.
>
> Client 3 comes along. Do they use invoice #1, out of order, or invoice #3?

You must use invoice #3, because #1 was already used for another client and it remains printed on paper.

I made a billing application (Tcl/Tk + PostgreSQL) where invoice numbers are generated by a sequence. It works with multiple billing locations. When an operator generates an invoice, he works without a number ... the number is generated at the moment of saving and printing. This way, it is possible to cancel an invoice before the number is generated and impossible to duplicate numbers. Once a number is generated, operators cannot cancel printing. In this way, one number always represents one printed invoice. If the invoice is canceled, its "paper version" is canceled too.

> I suppose if the assignment of the invoice number is set up in such a way
> that it is the very last action performed just before the commit, you should
> be able to ensure that indeed the situation of having to deal with a
> rollback would never occur (outside of a system crash between assignment of
> invoice and commit, and that could be explained to auditors).

I use "begin transaction" ... "commit" (or abort) while writing the invoice lines because I want to decrease (or increase) stocks on-line. This way the other users see the real stock for the products in the current invoice (I do not use a "stocks" table ... stocks are calculated on-line as sum(in) - sum(out) for each product, over all prices that are active at that moment). When the invoice is saved, its lines remain in the database and an invoice number is generated; if the invoice is canceled, all these lines are deleted, stocks return to their original values, and no invoice number is generated.
> [What happens in a paper world if a cup of coffee is spilt on some
> invoices, and these precious items are thrown in the trash?]

I cancel all these invoices but I never throw them in the trash because, in ROMANIA, printed invoices already have a printed number (with safety marks), and when the taxman comes to check he wants to see all these numbers, valid or canceled.

I hope this can help ... Sorry for my poor English.

George Moga, Data Systems SRL
Slobozia, ROMANIA
> I'm wondering how people create guaranteed sequential numbers - in my case
> for invoice numbers.
>
> - Sequences are not rollback'able.
> - It seems overkill to have a table just for this.
> - What else?

You'll probably need a table (although you may be able to get away with a single one for all of the sequences you want to keep). The big issue here is locking: a second transaction looking to get a number needs to wait for an earlier transaction that has already gotten a number to either commit or roll back, so that the number can be reused if necessary.
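[Editor's note: the counter-table approach described above can be sketched as follows; the table and column names are illustrative.]

```sql
-- A minimal gap-free counter table: one row holding the last number used.
CREATE TABLE invoice_counter (last_no integer NOT NULL);
INSERT INTO invoice_counter VALUES (0);

BEGIN;
-- The UPDATE takes a row-level lock, serializing concurrent callers:
-- a second transaction blocks here until this one commits or aborts.
UPDATE invoice_counter SET last_no = last_no + 1;
SELECT last_no FROM invoice_counter;  -- the number to stamp on the invoice
-- ... create the invoice using that number ...
COMMIT;  -- on ROLLBACK the increment is undone and the number is reused
```

This is exactly the trade-off named in the thread: numbers are gap-free because the increment rolls back with the transaction, but all invoice-creating transactions serialize on that one row, so they should be kept short.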
Thus spake Oliver Elphick
> >If so, why is no rollbackable an issue? All you should need is unique
> >numbers. Not necessarily exactly sequential numbers.
>
> For invoice numbers, it matters.
>
> Numbers missing from such a sequence are likely to provoke questions from
> auditors and taxmen; why borrow trouble?

I solved this exact problem once before in another database (Progress) but it should work here too. What I did was select a maximum number of concurrent transactions (I picked 10) and created a table with two columns, a token and a number. For every sequence that I needed I seeded the table with 10 rows, each with the name of the sequence so that my next routine could reference it, and with the number column set from 1 to 10.

When I need a new number I simply find the smallest number for that token, lock the row, use the number in my work, and commit everything when I am done.

In Progress I was able to scan through each number in order, stopping at the first unlocked one, so the process didn't block. Not sure how to apply that here. Perhaps a third column that you plug your process ID into, so that you do something like this:

UPDATE numbers SET reserve = [my ID]
 WHERE seqname = [token]
   AND seqnum = (SELECT MIN(seqnum) FROM numbers
                  WHERE seqname = [token] AND reserve IS NULL);

You would have to have some way of cleaning these up pretty quickly if your app crashed or failed to set reserve back to NULL. The other issue here is that, strictly speaking, you may not get your numbers sequentially, but you won't skip any. In my experience the accountants have been OK with that.

Hmmm. Perhaps "SELECT ... FOR UPDATE" would work instead of that extra column. Still need a way of skipping locked records though.

Perhaps in a future version of PostgreSQL we can have a first class serial type that handles all of this.
--
D'Arcy J.M. Cain <darcy@{druid|vex}.net>  |  Democracy is three wolves
http://www.druid.net/darcy/               |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)  |  what's for dinner.
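[Editor's note: D'Arcy's closing "SELECT ... FOR UPDATE" thought can be sketched like this. The `numbers` table layout follows his description; the pool size of 10 and the seed values are his, the exact statements are illustrative.]

```sql
-- Pool of pre-seeded numbers, one group of rows per sequence name.
CREATE TABLE numbers (
    seqname  text,
    seqnum   integer
);
INSERT INTO numbers VALUES ('invoice', 1);
-- ... likewise for 2 through 10, one row each.

BEGIN;
-- Lock the row holding the smallest available number for this token.
SELECT seqnum FROM numbers
 WHERE seqname = 'invoice'
 ORDER BY seqnum
 LIMIT 1
 FOR UPDATE;

-- Use the returned seqnum on the invoice, then advance the locked row
-- past the pool so the same number is never handed out twice:
UPDATE numbers SET seqnum = seqnum + 10
 WHERE seqname = 'invoice'
   AND seqnum = (SELECT MIN(seqnum) FROM numbers
                  WHERE seqname = 'invoice');
COMMIT;  -- on ROLLBACK the number silently returns to the pool
```

This confirms D'Arcy's caveat rather than solving it: plain `FOR UPDATE` blocks on a locked row instead of skipping to the next free one, so concurrent callers still queue up behind the smallest number.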
Thread: Optimization recommendations request

Hello,

I'm working on an application where I need to design for one table to grow to an extremely large size. I'm already planning to partition the data into multiple tables, and even possibly multiple servers, but even so each table may need to grow to the 10 - 15 million tuple range. This table will be used for a keyed lookup and it is very important that the query return in well under a second. I've done a small test using a dual ppro 200 server with 512 MB RAM, running RedHat Linux 6.1, and a late November 7.1 pre-beta snapshot of PostgreSQL (PGOPTS="-D/opt/pgsql/data -i -B 25000"). I used a test table with about 5 million tuples.

Details:

CREATE TABLE foo(
    guid varchar(20) not null,
    ks   varchar(20) not null
);

--> insert 5,000,000 rows; both guid and ks are 20-byte sha-1 hashes

-- tried this first
-- create index foo_idx1 on foo(guid);
-- then tried
create index foo_idx1 on foo using HASH (guid);

SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d';

The query currently takes in excess of 40 seconds. I would appreciate any suggestions for optimizing to bring this down substantially.

Thanks in advance,

Joe Conway
What does explain show for your query?

On Sat, 23 Dec 2000, Joe Conway wrote:
> Hello,
>
> I'm working on an application where I need to design for one table to grow
> to an extremely large size. I'm already planning to partition the data into
> multiple tables, and even possibly multiple servers, but even so each table
> may need to grow to the 10 - 15 million tuple range. This table will be used
> for a keyed lookup and it is very important that the query return in well
> under a second. I've done a small test using a dual ppro 200 server with 512
> MB RAM, running RedHat Linux 6.1, and a late November 7.1 pre-beta snapshot
> of PostgreSQL (PGOPTS="-D/opt/pgsql/data -i -B 25000"). I used a test table
> with about 5 million tuples.
>
> Details:
>
> CREATE TABLE foo(
>     guid varchar(20) not null,
>     ks varchar(20) not null
> );
>
> --> insert 5,000,000 rows, both guid and ks are 20 byte sha-1 hashes
> -- tried this first
> -- create index foo_idx1 on foo(guid);
> -- then tried
> create index foo_idx1 on foo using HASH (guid);
>
> SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d';
>
> The query currently takes in excess of 40 seconds. I would appreciate any
> suggestions for optimizing to bring this down substantially.
Subject: Re: [SQL] Optimization recommendations request

> What does explain show for your query?

I sent this a week ago using the wrong (i.e. not the one I signed up with) reply-to address, so it didn't make it to the list until just now. In the meantime I ran explain and noticed that the index wasn't being used. So I ran vacuum analyze, and now I'm getting the expected performance (and explain shows the index being used). If I understand the logged statistics correctly, I'm getting results returned in anywhere from about 3 to 45 ms, depending on cache hit rate. I also repeated my test with 15 million records, with similar results. Not bad at all!

I am still interested in any generic optimization tips for very large tables.

Thanks for taking the time to reply!

Joe
"Joe Conway" <joe@conway-family.com> writes: > create index foo_idx1 on foo using HASH (guid); > SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d'; > The query currently takes in excess of 40 seconds. I would appreciate any > suggestions for optimizing to bring this down substantially. Why are you using a hash index? btree is a lot better for every conceivable purpose. It would also be a good idea to check to make sure that the query is using the index (see EXPLAIN). You didn't mention whether you'd done a VACUUM ANALYZE, so there's a risk the planner will make the wrong choice. regards, tom lane