Thread: Money casting too liberal?
In contrast to certain other open-source databases, PostgreSQL leans toward protecting data from surprises and erroneous input, e.g. rejecting a date of 2013-02-31 instead of arbitrarily assigning a date of 2013-03-03. The same "throw an error" rather than "take a guess" philosophy applies to numeric and string operations as well. It's an approach I appreciate.

But it appears that the philosophy does not extend to the "money" type. There are certain checks: no alpha characters, '$' and '-' (if present) must be in the first two characters of the string, and commas can't be at the end. Otherwise the casting is fairly liberal. Commas, for instance, can appear nearly anywhere, including after the decimal point:

select ',123,456,,7,8.1,0,9'::money;
     money
----------------
 $12,345,678.11

Somewhat more worrisome is the fact that it automatically rounds input (away from zero) to fit.

select '123.456789'::money;
  money
---------
 $123.46

select '$-123.456789'::money;
  money
----------
 -$123.46

Thoughts? Is this the "no surprises" way that money input should behave?

Cheers,
Steve
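For comparison, the strict date behaviour Steve refers to looks roughly like this (the exact error wording varies by version, but the rejection is the point):

select '2013-02-31'::date;
ERROR:  date/time field value out of range: "2013-02-31"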
On 27/3/13 at 9:12 AM, Steve Crawford wrote:
>In contrast to certain other open-source databases, PostgreSQL leans
>toward protecting data from surprises ...
>
And long may this continue.

>But it appears that the philosophy does not extend to the "money"
>type. ...
>
>select ',123,456,,7,8.1,0,9'::money;
>     money
>----------------
> $12,345,678.11
>
In general terms I would hate for such probable garbage to appear as "legitimate" data in the dB.

>Somewhat more worrisome is the fact that it automatically rounds input (away from zero) to fit.
>
>select '123.456789'::money;
>  money
>---------
> $123.46
>
>select '$-123.456789'::money;
>  money
>----------
> -$123.46
>
>Thoughts? Is this the "no surprises" way that money input should behave?
>
I would defer to a CPA on the correct conventions for rounding. However I have a vague notion there are circumstances when rounding is always up, always down and (only sometimes) to the nearest. If the money type is meant to be serious then these conventions need to be followed/settable on a column-by-column basis. And money is done in whole dollars, thousands of dollars, and fractional cents according to the situation, i.e., not just two decimal places... another setting.

Personally I have ignored the money type in favour of numeric. Money seemed to do too much behind the scenes for my taste, but, that's me being lazy as well; I haven't spent much time trying to understand its features.

Regards
Gavan Schneider
On Thu, 28 Mar 2013 23:43:23 +1100 Gavan Schneider <pg-gts@snkmail.com> wrote:

> >But it appears that the philosophy does not extend to the "money"
> >type. ...

As the original author of the money type I guess I should weigh in.

> >select ',123,456,,7,8.1,0,9'::money;
> >     money
> >----------------
> > $12,345,678.11

It certainly doesn't accept that by design. I just never thought about such input. If you put garbage in, anything can happen, including acceptance. If this is an issue I guess we need to look for such things and reject it. Just a SMOP.

> I would defer to a CPA on the correct conventions for rounding.
> However I have a vague notion there are circumstances when
> rounding is always up, always down and (only sometimes) to the
> nearest. If the money type is meant to be serious then these
> conventions need to be followed/settable on a column by column

Possible. Generally I handle these issues in code because it is sometimes hard to nail down exact requirements that fit all. I also tend to use money only in situations where the exact dollars and cents are already known or are dealt with in code.

> basis. And money is done in whole dollars, thousands of dollars,
> and fractional cents according to the situation, i.e., not just
> two decimal places... another setting.

I would like to see the type handle other situations such as foreign (to me) currency, etc. I suppose a positional parameter and a currency string setting would handle most of those issues. Technically, the money type is a cents type. Everything is stored as the number of cents. Formatting it as dollars and cents is a convenience added by the I/O functions.

> Personally I have ignored the money type in favour of numeric.

Even as the author I sometimes go with numeric, but there is a place for the type. If you are working with simple dollars-and-cents quantities and you need to do lots of calculations on them, the money type can be a great performance boost. The big win that money brings is that everything is stored as an int. That means that you don't need to convert data in the database to a machine representation before summing, averaging, etc. The machine can generally work on the data as it comes out of the DB.

-- 
D'Arcy J.M. Cain <darcy@druid.net>       |  Democracy is three wolves
http://www.druid.net/darcy/              |  and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP)        |  what's for dinner.
IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
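A small illustration of the "stored as an int" point: summing a thousand ten-cent values stays exact because only whole cents are ever stored (the table-free query below is just an example; the displayed format follows lc_monetary):

select sum(amount)
from (select '0.10'::money as amount from generate_series(1, 1000)) s;
   sum
---------
 $100.00

The same exercise with float8 literals can drift by a fraction of a cent; with money the cents arithmetic is plain integer addition.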
On 03/28/2013 07:43 AM, Gavan Schneider wrote:
> Personally I have ignored the money type in favour of numeric. Money
> seemed to do too much behind the scenes for my taste, but, that's me
> being lazy as well, I haven't spent much time trying to understand its
> features.

You're not the only one. In the financial industry, we can't even use the money type for a few reasons:

1. It's very common for values to have fractional amounts in the selected currency. Just look at gas stations... they could never use the Money type in the US thanks to the "3.989" pricing they commonly employ.

2. You can't use the Money type for non-local currencies. Our database may store transactions in several base currencies. Sure, the smart thing would be to save the exchange rate at the time of the transaction and store the local value and the rate, but then you'd have to reverse that calculation to get the original value, and without decimals, that conversion would be wrong in most cases.

So we use numeric. It's the only thing with the guaranteed precision we need, and prettying up the display is easy to do client-side. We're extremely happy to see the recent improvements in numeric performance that seem to be coming in 9.3. :)

-- 
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
sthomas@optionshouse.com
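Shaun's first point in two lines: the same sub-cent price survives in numeric but is rounded (away from zero, as discussed above) by money:

select '3.989'::money;
 money
-------
 $3.99

select 3.989::numeric(6,3);
 numeric
---------
   3.989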
> On 27/3/13 at 9:12 AM, Steve Crawford wrote:
>> Thoughts? Is this the "no surprises" way that money input should behave?

I took a quick look at cash_in(), which is what's being complained of here (not really casting). There are several things that seem like they could possibly stand to be tightened up:

1. it allows parens to indicate "negative", eg (1234.56), but it's not very anal about insisting that a paren appear on the right iff there's one on the left, nor about the exact position of the trailing paren.

2. it allows thousands separators to the right of the decimal point, but only as long as it's still looking for digits, viz:

regression=# select '123.4567'::money;
  money
---------
 $123.46
(1 row)

regression=# select '123.45,67'::money;
  money
---------
 $123.46
(1 row)

regression=# select '123.456,7'::money;
ERROR:  invalid input syntax for type money: "123.456,7"
LINE 1: select '123.456,7'::money;
               ^

3. it is not picky about where you put thousands separators to the left of the decimal point.

Of these I think #2 is a flat-out bug: the digit collection loop should reject thousands seps once it's found a decimal point. #1 is a shortcut acknowledged in the code comments, and while it probably is worth fixing, I'm finding it hard to get really excited about it.

I think the debatable point is whether we want to tighten up #3 and if so how much. We surely do not want to insist that thousands separators are required. I think that different locales may space them differently --- at least, cash_out treats the spacing as configurable --- but do we want to reject, say, commas every three places if the locale specifies four? I think that might be a little too picky. On the whole I'm okay with treating them as noise as long as they're to the left of the decimal.

Gavan Schneider <pg-gts@snkmail.com> writes:
> I would defer to a CPA on the correct conventions for rounding.
> However I have a vague notion there are circumstances when
> rounding is always up, always down and (only sometimes) to the
> nearest. If the money type is meant to be serious then these
> conventions need to be followed/settable on a column by column
> basis. And money is done in whole dollars, thousands of dollars,
> and fractional cents according to the situation, i.e., not just
> two decimal places... another setting.

If you need that kind of control over rounding you need to be doing it in your calculation code. Expecting an I/O function to provide it is doomed to failure.

			regards, tom lane
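For anyone who hasn't met point #1, the parenthesised-negative convention looks roughly like this; per the note above the parser is also reportedly lax about the closing paren, so the unbalanced form may be accepted too (sketch only, output format follows lc_monetary):

select '(1234.56)'::money;
   money
------------
 -$1,234.56

select '(1234.56'::money;   -- unbalanced, but said to slip through cash_in
   money
------------
 -$1,234.56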
On 29/03/13 02:28, D'Arcy J.M. Cain wrote:
[... snip ...]

I am (now) primarily a Java developer (in my bad past I have done FORTRAN, COBOL, & other languages).
I use PostgreSQL in 2 situations:
- To create a 'concrete sketch' of a sub set of a data model to explore how to represent things
- To create an actual production database.
So I would like a money type that I can use in all appropriate situations.
Cheers,
Gavin
On 2013-03-28, D'Arcy J.M. Cain <darcy@druid.net> wrote:
>
> I would like to see the type handle other situations such as foreign
> (to me) currency, etc. I suppose a positional parameter and a currency
> string setting would handle most of those issues. Technically, the
> money type is a cents type. Everything is stored as the number of
> cents. Formatting it as dollars and cents is a convenience added by
> the I/O functions.

It actually does that: if you have the locale installed you can set LC_MONETARY to Japan and get no decimals and a Yen symbol, or to UAE and get three decimals and their currency symbol.

> Even as the author I sometimes go with numeric but there is a place for
> the type. If you are working with simple dollars and cents quantities
> and you need to do lots of calculations on them, the money type can be
> a great performance boost. The big win that money brings is that
> everything is stored as an int. That means that you don't need to
> convert data in the database to a machine representation before
> summing, averaging, etc. The machine can generally work on the data as
> it comes out of the DB.

It'd be nice to be able to do a money * numeric operation. Currently multiplication is done in floating point, and this can give unexpected results.

-- 
⚂⚃ 100% natural
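A sketch of the floating-point multiplication point, with a possible exact-arithmetic detour via the money/numeric casts (available in recent releases; whether your version has them is something to check, not assume):

select '100.00'::money * 1.1;
 ?column?
----------
 $110.00
-- the numeric literal is implicitly cast to double precision, so the product
-- is computed in floating point and then rounded back to cents

select ('100.00'::money::numeric * 1.1)::money;
 ?column?
----------
 $110.00
-- same result here, but the intermediate arithmetic stays exact in numeric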
On 28 Mar 2013 20:50:42 GMT Jasen Betts <jasen@xnet.co.nz> wrote:
> it actually does that, if you have the locale installed you can set
> LC_MONETARY to Japan and get no decimals and a Yen symbol
> or to UAE and get three decimals and their currency symbol.

Must have been added by someone else after I worked on it. I thought about that issue but felt that that was the wrong solution. The problem is that the same data displays differently depending on who runs the query. I would have rather made that part of the column definition similar to how we create timestamps with or without timezones. If a column is tracking Yen it should always be Yen. Y10,000 should never display as $100.00 just because the locale changes.

-- 
D'Arcy J.M. Cain <darcy@druid.net>       |  Democracy is three wolves
http://www.druid.net/darcy/              |  and a sheep voting on
+1 416 788 2246 (DoD#0082) (eNTP)        |  what's for dinner.
IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
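The morphing D'Arcy describes can be reproduced roughly like this (a sketch only; it assumes both locales are installed and the exact symbols and grouping come from the platform's locale data):

create temp table t (amt money);
set lc_monetary = 'en_US.UTF-8';
insert into t values ('100.00');   -- stored as 10000 minor units
set lc_monetary = 'ja_JP.UTF-8';
select amt from t;                 -- the same stored integer now renders as ¥10,000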
On 3/28/2013 2:13 PM, D'Arcy J.M. Cain wrote:
> I would have rather made that part of the column definition similar to
> how we create timestamps with or without timezones. If a column is
> tracking Yen it should always be Yen. Y10,000 should never display as
> $100.00 just because the locale changes.

or to another extreme, part of the data, such that different rows could have different monetary units (eg, money is implemented as a pair (currency, amount)). eeek, then you'd need exchange rate tables and such. hahahahaha, what a nightmare.

-- 
john r pierce                            37N 122W
somewhere on the middle of the left coast
On 03/28/2013 04:36 PM, John R Pierce wrote: > or to another extreme, part of the data, such that different rows could > have different monetary units. (eg, money is implemented as a pair > (currency,amount). eeek, then you'd need exchange rate tables and > such. hahahahaha, what a nightmare. Naw. Only if you wanted to convert them. I personally wonder why it wasn't implemented this way to begin with. Like TSTZ, with a MONEY type, the currency is the TZ segment. XXX amount in YYY encoding, with the default being the currency of the locale if not otherwise specified. It would still be useless for calculations in applications requiring more significant figures, but would make more sense than the currently magically morphing value it is now. "Hey, we just shipped a DB server to Japan, and now all of the monetary values are wrong. WTF!" Yeah... no. :) -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-676-8870 sthomas@optionshouse.com ______________________________________________ See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email
On 29/03/13 10:13, D'Arcy J.M. Cain wrote:
[... snip ...]

Eeeks!
I agree...
Hmm... This should optionally apply to time. e.g. time_i_got_up_in_the_morning should reflect the time zone where I got up - if I got up at 8am NZ time then this should be displayed, not 12pm (12 noon) to someone in Los Angeles or 3am in Tokyo! (have a 'localtime' data type? - possibly add the timezone code if displayed in a different time zone.)
Cheers,
Gavin
On 2013-03-28, Gavin Flower <GavinFlower@archidevsys.co.nz> wrote:
>
> Hmm... This should optionally apply to time. e.g.
> time_i_got_up_in_the_morning should reflect the time zone where I got up
> - if I got up at 8am NZ time then this should be displayed, not 12pm (12
> noon) to someone in Los Angeles or 3am in Tokyo! (have a 'localtime'
> data type? - possibly add the timezone code if displayed in a different
> time zone.)

it was 12 noon in LA when you got up. if you want the local time of the event you can specify where you want it:

  at time zone 'Pacific/Auckland'
  at time zone 'NZDT'       -- note: some names are ambiguous eg: 'EST'
  or at time zone '-13:00'  -- note: offsets are ISO, not POSIX

Getting the local time of the event this way requires that you store the locale, zone name, or offset when you store the time. or you could just cast it to text when you store it...

how confusing is 'EST'? worse than this:

  set datestyle to 'sql,dmy';
  set time zone 'Australia/Brisbane';
  select '20130101T000000Z'::timestamptz;
  set time zone 'Australia/Sydney';
  select '20130101T000000Z'::timestamptz;
  set time zone 'America/New_York';
  select '20130101T000000Z'::timestamptz;

-- 
⚂⚃ 100% natural
On Fri, Mar 29, 2013 at 10:39 AM, Jasen Betts <jasen@xnet.co.nz> wrote: > how confusing is 'EST' ? > worse than this: > > set datestyle to 'sql,dmy'; > set time zone 'Australia/Brisbane'; > select '20130101T000000Z'::timestamptz; > set time zone 'Australia/Sydney'; > select '20130101T000000Z'::timestamptz; > set time zone 'America/New_York'; > select '20130101T000000Z'::timestamptz; As a Melburnian (that is, I live in Australia/Melbourne, which is in the same timezone as Sydney - but not Brisbane), I prefer to call it EAST - Eastern Australian Standard Time. Of course, for anything that really matters, I'll work with UTC. Much safer. ChrisA
Steve Crawford wrote
>
> select ',123,456,,7,8.1,0,9'::money;
>      money
> ----------------
>  $12,345,678.11

As an end-user it would seem that, since a comma (or whatever the locale defines as a group separator) carries no significant information - it is purely aesthetic - ignoring all commas during input conversion is a reasonable behavior. The placement of the commas is mere convention and not formally encoded in the locale. While for money the point is usually moot, a long sequence of fractional digits would benefit from a group separator just as a long sequence of whole-number digits traditionally does.

Steve Crawford wrote
> Somewhat more worrisome is the fact that it automatically rounds input
> (away from zero) to fit.
>
> select '123.456789'::money;
>   money
> ---------
>  $123.46

This too seems to be useful since, for instance, if you end up dividing a money value by a unit count - to get a per-unit value - the end result should be in the same currency and would have to be rounded to the maximum precision of the currency involved.

Given that neither behavior is overtly wrong, if during data entry you feel a more stringent input string is required you will need to use a regular expression to assert that constraint.

David J.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Money-casting-too-liberal-tp5749919p5749933.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
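One way to follow that suggestion is to validate the text form before it ever reaches money, for example with a small wrapper function; the function name and the US-style regex below are purely illustrative, not anything built in:

create or replace function strict_money(txt text) returns money as $body$
begin
    -- accept an optional sign and '$', correctly grouped thousands, at most two decimals
    if txt !~ '^-?\$?\d{1,3}(,\d{3})*(\.\d{1,2})?$' then
        raise exception 'invalid money literal: %', txt;
    end if;
    return txt::money;
end;
$body$ language plpgsql;

select strict_money('$12,345.67');           -- $12,345.67
select strict_money(',123,456,,7,8.1,0,9');  -- ERROR: invalid money literal: ...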
Some people wrote: >... Hmm... This should optionally apply to time. >... for anything that really matters, I'll work with UTC. > Is there a Godwin's law <http://en.wikipedia.org/wiki/Godwin's_law> equivalent for when our conversations end up with timezones getting mentioned? :) Regards Gavan Schneider
Some thoughts.

The current MONEY type might be considered akin to ASCII. Perfect for a basic US-centric accounting system where there are cents and dollars and no need to carry smaller fractions. As discussed, there are some details that could be refined.

When it comes to this type being used in full-blown money systems it lacks the ability to carry fractions of cents and keep track of currencies. It also needs to play nicer with other exact types such as numeric, i.e., no intermediate calculations as real.

Therefore the discussion is really about the desired role for the MONEY type. Should it be refined in its current dollar and cents mode? or, be promoted to a more universal role (akin to a shift from ASCII to UTF)?

If there is merit in making MONEY work for most situations involving financial transactions I think the following might apply:

- keep integer as the underlying base type (for performance)
- generalise the decimal multiplier of a MONEY column so a specific MONEY column can be what its creator wants (from partial cents to millions of dollars/Yen/Other, along with rounding/truncating rules as required by the user or his/her external agencies)
- define the currency for a given column and only allow this to change in defined ways, and specifically forbid implicit changes such as would arise from altering LOCALE information
- ensure the MONEY type plays nice with other exact precision types, i.e., convert to REAL/FLOAT as a very last resort

Personally I don't think it is appropriate for the MONEY type to have variable characteristics (such as different currencies) within a given column; rather the column definition should define the currency along with the desired decimal multiplier and whatever else is required. The actual values within the column remain as simple integers. This is mostly based on performance issues. If the MONEY type is to be used it has to offer real performance benefits over bespoke NUMERIC applications.

Regards
Gavan Schneider
Gavan Schneider <pg-gts@snkmail.com> writes: > Therefore the discussion is really about the desired role for > the MONEY type. Should it be refined in its current dallar and > cents mode? or, be promoted to a more universal role (akin to a > shift from ASCII to UTF)? Well, this has been discussed before, and the majority view every time has been that MONEY is a legacy thing that most people would rather rip out than sink a large amount of additional effort into. It has some use-cases but they are narrow, and it's not clear how much wider the use-cases would be if we tried to generalize it. My own experience with this sort of thing leads me to think that real applications dealing with a variety of currencies will be needing to store additional details, such as the exact exchange rate that applied to a particular transaction. So while merely decoupling MONEY from lc_monetary doesn't sound like a bad thing, it's not clear it really buys that much. regards, tom lane
On Fri, 29 Mar 2013 11:46:40 -0400 Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Well, this has been discussed before, and the majority view every
> time has been that MONEY is a legacy thing that most people would
> rather rip out than sink a large amount of additional effort into.
> It has some use-cases but they are narrow, and it's not clear how
> much wider the use-cases would be if we tried to generalize it.

I wonder if our vision isn't a little tunneled here. Using this type for money is, perhaps, a specialized use and the type should really be called something else and modified to remove all connotations of money from it. So...

- Drop the currency symbol
- Allow number of decimals to be defined once for the column
- Don't use locale except to specify decimal separator (',' vs. '.')
- Allow operations against numeric

Not sure what to rename it to. Decimal would be good if it wasn't already in use. Maybe DecimalInt.

> My own experience with this sort of thing leads me to think that
> real applications dealing with a variety of currencies will be
> needing to store additional details, such as the exact exchange
> rate that applied to a particular transaction. So while merely

Seems like something that can be stored in a different column.

-- 
D'Arcy J.M. Cain <darcy@druid.net>       |  Democracy is three wolves
http://www.druid.net/darcy/              |  and a sheep voting on
+1 416 788 2246 (DoD#0082) (eNTP)        |  what's for dinner.
IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
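For concreteness, the "different column" approach being circled here might look something like the schema below (purely illustrative, not anything proposed for core):

create table transactions (
    id         bigserial primary key,
    amount     numeric(18,4) not null,   -- value in the transaction currency
    currency   char(3)       not null,   -- ISO 4217 code, e.g. 'USD', 'JPY'
    fx_rate    numeric(18,8),            -- rate to the base currency at booking time
    booked_at  timestamptz   not null default now()
);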
On 28 March 2013 13:52, Shaun Thomas <sthomas@optionshouse.com> wrote:
> On 03/28/2013 07:43 AM, Gavan Schneider wrote:
>> Personally I have ignored the money type in favour of numeric. Money
>> seemed to do too much behind the scenes for my taste, but, that's me
>> being lazy as well, I haven't spent much time trying to understand its
>> features.
>
> You're not the only one. In the financial industry, we can't even use the money type for a few reasons:
> [... snip ... ]
Speaking as another finance/trading industry techie who works with various kinds of price data, I also find the money type interesting but useless. I am interested in scaled integers in general though, be they of fixed scale (per column, part of the type) or of floating scale (floating point decimal). I have run into those all over the place in software and protocols. They can be stored and computed more efficiently than the more general variable-sized BCD string system, where scale and precision are more like check constraints than limits of representation allowing for fixed-size bitfields.
For floating point decimal, IEEE 754-2008 decimal32, decimal64 and decimal128 types would make interesting additions (the scale travels with each number: it's essentially a bitfield of sign + exponent/scale + significand, which is efficient for software implementations, or an isomorphic BCD-like fixed-size encoding which is used by IBM's POWER DFP hardware). But that can be implemented as custom types outside core PostgreSQL (I've done some initial experimentation with this, defining a type DECIMAL64, and not encountered any obstacles, using IBM decNumber, which is available under the liberal ICU license or the GPL license, and is used by many projects; there is also an Intel library with a BSD license IIRC).
For fixed point decimal, a new scaled integer type with fixed scale and precision could be made that uses different representations depending on the parameters, much like the way Common Lisp implementations use fixnums based on word size where possible and fall back to arbitrary-sized representations if needed. That would of course be implementable outside core too.
Even the built-in NUMERIC could in theory use multiple encodings, whenever the scale and precision are provided, since it can work out whether they are within the limits that are implementable with different binary representations (in other words, when you ask for NUMERIC(*, 2), do what MONEY for US locales does, otherwise fall back to the more general case). But that would change the rules about when rewrites are required if you change scale/precision, so wouldn't be reasonable.
On Thu, 2013-03-28 at 23:43 +1100, Gavan Schneider wrote: > If the money type is meant to be serious then these > conventions need to be followed/settable on a column by column > basis. I don't like the idea of tying the semantics to a column. That leaves out values that aren't stored in a column, e.g. literals or the results of some expression. Regards, Jeff Davis
On 3/27/13, Steve Crawford <scrawford@pinpointresearch.com> wrote:
> Somewhat more worrisome is the fact that it automatically rounds input
> (away from zero) to fit.
>
> select '123.456789'::money;
>   money
> ---------
>  $123.46

So does casting to an integer:

select 1.25::integer;
 int4
------
    1

And then there's this:

create table wkdata (numval numeric(5,2))
CREATE TABLE
Time: 6.761 ms
nolan=> insert into wkdata
nolan-> values (123.456789);
INSERT 569625265 1
Time: 4.063 ms
nolan=> select * from wkdata;
 numval
--------
 123.46

So rounding a money field doesn't seem inconsistent with other data types.
--
Mike Nolan
On Fri, 29 Mar 2013 12:02:49 -0700 Jeff Davis <pgsql@j-davis.com> wrote:
> On Thu, 2013-03-28 at 23:43 +1100, Gavan Schneider wrote:
> > If the money type is meant to be serious then these
> > conventions need to be followed/settable on a column by column
> > basis.
>
> I don't like the idea of tying the semantics to a column. That leaves
> out values that aren't stored in a column, e.g. literals or the
> results of some expression.

OK, I hadn't thought of that, but now that I am... How would this be an issue? If you are assigning a literal to a column then that's no issue. Otherwise, a literal is simply a value that can be cast depending on the situation. The money type is no different in that regard.

As a result of an expression, it will have the type of the data in the expression. What if the result is the addition of two columns of different precisions? Pick the higher precision? Forbid the operation? The latter may make sense. How can you add Yen and US$?

-- 
D'Arcy J.M. Cain <darcy@druid.net>       |  Democracy is three wolves
http://www.druid.net/darcy/              |  and a sheep voting on
+1 416 788 2246 (DoD#0082) (eNTP)        |  what's for dinner.
IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
On 29/3/13 at 3:32 AM, D'Arcy J.M. Cain wrote:
>On Fri, 29 Mar 2013 11:46:40 -0400 Tom Lane wrote:
>>Well, this has been discussed before, and the majority view every
>>time has been that MONEY is a legacy thing that most people would
>>rather rip out than sink a large amount of additional effort into.

The only reason I have tried to explore these ideas is that the type is currently too quirky for most use cases. So I must agree that remove/ignore is the least-work option.

An argument for making the type more useful can be made by analogy to the geolocation add-in type. Most never go there, but those who need to do so seem to prefer the builtin functionality over hand-coding the same behaviour with columns of arrays that just happen to contain location data.

>>It has some use-cases but they are narrow, and it's not clear how
>>much wider the use-cases would be if we tried to generalize it.

A well designed and specific tool can be worth the effort. The use cases include:

Financial data: accounts in a single currency, i.e., the money column in a transaction.

Multi-currency data: keeping track of transactions across several currencies. Specifically we are NOT doing conversions; what arrives/leaves as $ or ¥ stays that way. This implies the dB has tables for each area of operation or columns for each currency.

One thing the type should not attempt or allow is any implicit transforming of values. Mostly a currency change is a transaction, and whenever it happens it has to be recorded as such, e.g., so many ¥ leave their column, appropriate $ are added to their column, and commission $/¥ is added to its column; also included will be: exchange rate reference, time-stamp, journal reference, etc. A constraint could be constructed to ensure the double-entry book-keeping zero-sum convention has been maintained across the whole transaction.

One time this might not be so detailed is for a VIEW where something akin to total worth is being reported. In cases like this the exchange rates would usually be in their own table and the business rules would dictate which one is to be used to build the VIEW, e.g., end-of-month report, and it might be shown with all values in a single currency depending on the company's HQ.

>I wonder if our vision isn't a little tunneled here. Using this type
>for money is, perhaps, a specialized use and the type should really be
>called something else and modified to remove all connotations of money
>from it. So...
>
>- Drop the currency symbol
>- Allow number of decimals to be defined once for the column
>- Don't use locale except to specify decimal separator (',' vs. '.')
>
Mostly this is cosmetic and only relevant for parsing text on data entry or default formatting with SELECT on the command line. The power of the class is that none of this is in the data other than as dB column flags. The values themselves are integer. The class is meant to keep the books moving right along.

>- Allow operations against numeric
>
Whatever else is done this should happen.

>Not sure what to rename it to. Decimal would be good if it wasn't
>already in use. Maybe DecimalInt.
>
I don't think there is much use for another fixed-precision integral type. NUMERIC does a good job when INTEGER isn't suitable. If this exercise is worth anything then MONEY should just do its job better so people who track money (and there is an awful lot of them) will find it useful.
>>My own experience with this sort of thing leads me to think that
>>real applications dealing with a variety of currencies will be
>>needing to store additional details, such as the exact exchange
>>rate that applied to a particular transaction. So while merely
>
>Seems like something that can be stored in a different column.
>
Exactly. We need to think this through as would a real user. If the business is receiving money from multiple regions then there will be rows which show the currency, number of units (numeric type since the column is not devoted to a specific currency), transaction tracing data, exchange reference (another table), amt_received::MONEY('USD','D2'), amt_transaction_fee::MONEY('USD','D3'), etc.

Within the accounts of the organisation the MONEY columns are likely to be in a single currency, with movements between ledgers in the time-honoured fashion of adding to this while removing the same from other(s), so all money entries add to zero across the row. Movements between currencies are just another transaction as detailed above.

I have sketched something of a notation for MONEY columns along these lines:

amt_received MONEY (
    CURRENCY   -- e.g., 'USD' 'AUD' 'YEN' ...
    [,SCALE    -- default as per currency, e.g. USD 2 decimals
               -- but could be used to see money in bigger units
               -- such as '000s (e.g., that end-of-month view)
    [,ROUND    -- need to allow for multiple rules here, sometimes
               -- cents are just dropped, otherwise it can be
               -- required that rounding is up or down
    [,OTHER?
]]])

I have left the display characteristics out (they could be there as a default) but column values are going to be displayed however the application wants them, and this only applies at the time of reporting. Each currency can carry the conventional defaults and the application should have formatting tools to alter this during output.

Inputting money values, i.e., text to MONEY, should follow the conventions of the target currency. Specifically the input conversion routine should handle the symbol (or no symbol) and all the usual conventions for negative values, decimals and separators. It should throw an error if asked to add a value to a USD column but finds a yen symbol in the text. (There is no such help for all of us sharing the $ symbol. :) Also it should parse such things as 123.456,00 (Europe) and 123,456.00 (Anglo) properly. Errors need to be thrown when it looks wrong, e.g. 123,456.789.00, since this is likely to be corrupted data - which finally gets me back to the issue raised by the OP. :)

Hope this hasn't been too much of a ramble.

Regards, and happy (Western) Easter to all,
Gavan Schneider
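Nothing like the MONEY(currency, scale, round) notation exists today, but part of the idea - a fixed scale pinned to the column, independent of locale - can be roughly approximated with domains over numeric (the names below are purely illustrative):

create domain usd_amount as numeric(14,2);   -- two decimals, always
create domain jpy_amount as numeric(14,0);   -- whole yen only

create table ledger (
    amt_received     usd_amount,
    amt_received_jpy jpy_amount
);

-- currency identity and rounding rules beyond "round to scale on assignment"
-- would still have to live in application code or triggers, which is the gap
-- being discussed here.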
On Fri, 2013-03-29 at 16:30 -0400, D'Arcy J.M. Cain wrote: > How would this be an issue? If you are assigning a literal to a column > then that's no issue. Otherwise, a literal is simply a value that can > be cast depending on the situation. The money type is no different in > that regard. > > As a result of an expression, it will have the type of the data in the > expression. What if the result is the addition of two columns of > different precisions? Pick the higher precision? Forbid the > operation? The latter may make sense. How can you add Yen and US$? Why not have various rounding functions that do exactly what you want? Then you can use them anywhere you want in an expression. Tying a bunch of magic to the column, I/O function, or type system just seems like the wrong approach when it comes to real differences (like precision). Regards, Jeff Davis
On 30/3/13 at 9:30 AM, I wrote:
>I have sketched something of a notation for MONEY columns along these lines:
>
>amt_received MONEY (
>    CURRENCY   -- e.g., 'USD' 'AUD' 'YEN' ...
>    [,SCALE    -- default as per currency, e.g. USD 2 decimals
>               -- but could be used to see money in bigger units
>               -- such as '000s (e.g., that end-of-month view)
>    [,ROUND    -- need to allow for multiple rules here, sometimes
>               -- cents are just dropped, otherwise it can be
>               -- required that rounding is up or down
>    [,OTHER?
>]]])
>
Addition...

No MONEY column would be complete without the ability to specify whether it is normally DEBIT or CREDIT (or in my preferred case NATURAL, i.e., no sign is pre-applied before any arithmetic between columns). This is possibly the best use case for the type since it really allows for the DB/CR (IMNSHO arcane) conventions to be properly handled within established industry traditions, and has special benefits with externally provided data... values will enter the dB with sign conventions properly observed.

Regards
Gavan Schneider
On 29/03/13 23:32, Gavan Schneider wrote:
> Some people wrote:
>
>> ... Hmm... This should optionally apply to time.
>> ... for anything that really matters, I'll work with UTC.
>>
> Is there a Godwin's law <http://en.wikipedia.org/wiki/Godwin's_law>
> equivalent for when our conversations end up with timezones getting
> mentioned? :)
>
> Regards
> Gavan Schneider

Interesting discussion.

The comparison with timezones ends when it comes to exchange rates. The rate at the time of transaction has to be stored (somewhere) associated with the base value. Timezones are rather fixed.

I went with numeric over money a long time ago (numerous discussions in #postgresql). As per the docs on NUMERIC:

"It is especially recommended for storing monetary amounts and other quantities where exactness is required"

"However, arithmetic on numeric values is very slow compared to the integer types"

With a current WIP I'm starting to think that numeric is probably overkill for storing monetary values as well (are we going to go much more than 6 decimal places? and that's just for storing the rates...).

Now considering just using integers. All the formatting of the input and display for output is done in the front end; just don't make a mistake there.

Boring story... An accountant came to visit us and pulled out his calculator and started doing some simple math. He did not use the decimal point once; he always knew where it was - although I would bet he would have trouble with division.

Regards,
Jules.
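The "integer cents plus front-end (or query-side) formatting" approach can be sketched in a couple of lines; the table and format string here are just examples:

create temp table tx (amount_cents bigint);
insert into tx values (1234567);

select to_char(amount_cents / 100.0, 'FM9,999,999,990.00') as amount from tx;
  amount
-----------
 12,345.67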
"Interesting discussion.
The comparisons with timezones ends when it comes to exchange rates.
The rate at the time of transaction has to the stored (somewhere)
associated with the base value. Timezones are rather fixed."
+1
No way can be solved just by type....
On Fri, 29 Mar 2013 23:32:20 +1100 Gavan Schneider <pg-gts@snkmail.com> wrote: > Is there a Godwin's law I am formulating Cain's Law. Something like "If a discussion lasts long enough, someone will mention Godwin's Law." :-) -- D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 788 2246 (DoD#0082) (eNTP) | what's for dinner. IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
On Sat, 30 Mar 2013 20:36:22 +1100 Julian <tempura@internode.on.net> wrote: > The comparisons with timezones ends when it comes to exchange rates. The only comparison I made with time and time zones was in how the column type can be refined when it is created. > With a current WIP. I'm starting to think that numeric is probably > overkill for storing monetary values as well (are we going to go much > more than 6 decimal places? and thats just for storing the rates...). > > Now considering just using integers. All the formatting of the input > and display for output is done in the front end, just don't make a > mistake there. Yes, that's exactly why the money type exists. -- D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 788 2246 (DoD#0082) (eNTP) | what's for dinner. IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
On Fri, 29 Mar 2013 17:23:50 -0700 Jeff Davis <pgsql@j-davis.com> wrote: > Why not have various rounding functions that do exactly what you want? > Then you can use them anywhere you want in an expression. Perhaps but the languages that we use all have the capability to manage this and we will probably never cover all the bases in PG. > Tying a bunch of magic to the column, I/O function, or type system > just seems like the wrong approach when it comes to real differences > (like precision). That's why I suggested that operations between money(2) and money(3) should raise an error. Treat them as distinct types. -- D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 788 2246 (DoD#0082) (eNTP) | what's for dinner. IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
On Sat, 30 Mar 2013 12:04:21 +1100 Gavan Schneider <pg-gts@snkmail.com> wrote: > No MONEY column would be complete without the ability to specify > whether it is normally DEBIT or CREDIT (or in my preferred case That seems extreme. What use case would there ever be for making a column always debit or always credit? I have a G/L system and most money columns either don't know about Dr/Cr or else there is another column with the G/L account which implies Dr/Cr. Where do you see a column that would be dedicated to one or the other? -- D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 788 2246 (DoD#0082) (eNTP) | what's for dinner. IM: darcy@Vex.Net, VOIP: sip:darcy@Vex.Net
On 2013-03-29, Gavan Schneider <pg-gts@snkmail.com> wrote:
> Some thoughts.
>
> The current MONEY type might be considered akin to ASCII.
> Perfect for a basic US-centric accounting system where there are
> cents and dollars and no need to carry smaller fractions. As
> discussed, there are some details that could be refined.
>
> When it comes to this type being used in full blown money
> systems it lacks the ability to carry fractions of cents and
> keep track of currencies. It also needs to play nicer with other
> exact types such as numeric, i.e., no intermediate calculations
> as real.
>
> Therefore the discussion is really about the desired role for
> the MONEY type. Should it be refined in its current dollar and
> cents mode? or, be promoted to a more universal role (akin to a
> shift from ASCII to UTF)?
>
> If there is merit in making MONEY work for most situations
> involving financial transactions I think the following might apply:
>
> - keep integer as the underlying base type (for performance)
> - generalise the decimal multiplier of a MONEY column so a
> specific MONEY column can be what its creator wants (from
> partial cents to millions of dollars/Yen/Other, along with
> rounding/truncating rules as required by the user or his/her
> external agencies)

I think a more generic denominator should be allowed; don't they trade in 64ths of a dollar on the stock market...

> - define the currency for a given column and only allow this to
> change in defined ways, and specifically forbid implicit changes
> such as would arise from altering LOCALE information

we've got that for collation on text columns, so that makes sense

> - ensure the MONEY type plays nice with other exact precision
> types, i.e., convert to REAL/FLOAT as a very last resort

> Personally I don't think it is appropriate for the MONEY type to
> have variable characteristics (such as different currencies)
> within a given column, rather the column definition should define
> the currency along with the desired decimal multiplier and
> whatever else is required.

yeah, I can't see any case where this would be useful; doing that would likely give the accountants nightmares. None of the aggregate functions that normally can be applied to number columns could be sensibly applied to such a column.

> The actual values within the column
> remain as simple integers. This is mostly based on performance
> issues. If the MONEY type is to be used it has to offer real
> performance benefits over bespoke NUMERIC applications.

I'm currently using it to allow easy localization. performance isn't really an issue.

maybe we should propose a whole new type fixed_point(n) which is an integer column with a denominator "n" defined on a per-column basis.

-- 
⚂⚃ 100% natural
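Short of a real fixed_point(n) type, a per-column denominator can be imitated today with a domain and a check constraint; the sketch below pins values to 64ths (the domain name and denominator are made up for the example):

create domain sixty_fourths as numeric
    check (value * 64 = trunc(value * 64));   -- only exact multiples of 1/64 allowed

select 1.015625::sixty_fourths;   -- 65/64, accepted
select 1.01::sixty_fourths;       -- fails the check constraint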
Martijn van Oosterhout developed "tagged types" back in 2005, looks like it went nowhere. You can search for it, it was pretty interesting. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
unsubscribe
On 30/3/13 at 11:09 PM, D'Arcy J.M. Cain wrote: >I am formulating Cain's Law. Something like "If a discussion lasts >long enough, someone will mention Godwin's Law." +1 More formally: As an online discussion grows longer, the probability of Godwin's Law being mentioned approaches one. Regards Gavan Schneider
On 30/3/13 at 12:58 AM, D'Arcy J.M. Cain wrote:
>On Sat, 30 Mar 2013 12:04:21 +1100 Gavan Schneider wrote:
>>No MONEY column would be complete without the ability to
>>specify whether it is normally DEBIT or CREDIT (or in my
>>preferred case
>
>That seems extreme. What use case would there ever be for making a
>column always debit or always credit? I have a G/L system and most
>money columns either don't know about Dr/Cr or else there is another
>column with the G/L account which implies Dr/Cr. Where do you see a
>column that would be dedicated to one or the other?
>
If you have a credit card and a bank account you are already familiar with the concept of Debit and Credit columns. If the balance figures on your bank account are negative you have become overdrawn (i.e., it's a Credit column), if the balance of your credit card becomes negative it means you have paid them too much money (i.e., it's a Debit column). Notice how the sign is different when money is paid to the bank account (+ve) as opposed to the credit card (-ve).

On the G/L system you probably have all the liabilities listed and when added up they come to a positive number. The assets should also add up to a positive number. Adding the two together in simple arithmetic terms should produce a nice big positive number which is not useful. The accounting convention is to negate all Debit values before adding them to Credit values, i.e., the result represents how much assets exceed liabilities. Obviously a negative number here means bad news for unsecured creditors if the company is in receivership.

Most people don't notice this process since it is part of an accounting framework. Deep inside the application is a lookup table or application code or some other device that applies this Debit/Credit convention every time it's needed. My proposal is to make this part of the column characteristic so this logic is moved to the table design phase (and handled by the backend) rather than the application needing to keep track of which column values need to be negated and when.

Basically if MONEY is to be a useful tool it should really handle money matters in a way that makes accountants happy. If it can't do that then nobody is going to bother using it for serious work since NUMERIC and INTEGER will do the job just as well without the surprises.

Regards
Gavan Schneider
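The convention described here - negate all Debit values before adding them to Credit values - is easy enough to express as a query today; the schema below is invented purely to show the idea:

-- assume journal_lines(account text, side char(1), amount numeric), side = 'D' or 'C'
select account,
       sum(case when side = 'D' then -amount else amount end) as balance
from journal_lines
group by account;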
On 31/03/13 21:57, Gavan Schneider wrote:
> On 30/3/13 at 12:58 AM, D'Arcy J.M. Cain wrote:
> Basically if MONEY is to be a useful tool it should really handle money
> matters in a way that makes accountants happy. If it can't do that then
> nobody is going to bother using it for serious work since NUMERIC and
> INTEGER will do the job just as well without the surprises.
>
> Regards
> Gavan Schneider

Hi Gavan.

I most certainly do not store my GL values like that. I'm sure it's all about making the accountant "happy", but if he looked at any of the accounting databases I've worked on he would just be "confused" (he's way old school). He gets the "conventions" on printouts and a highlighter. :)

Anyhow, money could perhaps inspire a generic integer-based data type suitable for monetary values. A strict number literal for input and output. e.g:

#SELECT '12.345'::decint2(3);
  decint2
------------
   12.345

For example something like decint8(6) could still handle trillions with 6 decimal places, and exchange rates could fit well within a decint4(x). I think that would be useful. Admittedly I'm just thinking on the fly here, so will appreciate people correcting me or commenting.

Aside from that, it would appear that people are just happy with and recommend numeric for the time being.

Just some thoughts.

Regards.
Jules.
unsubscribe
On 31/03/2013 17:45, ajmcello wrote: > unsubscribe Hi there, Instructions for unsubscribing are in the footer of every email sent from the list: > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org > <mailto:pgsql-general@postgresql.org>) > To make changes to your subscription: > http://www.postgresql.org/__mailpref/pgsql-general > <http://www.postgresql.org/mailpref/pgsql-general> HTH, Ray. -- Raymond O'Donnell :: Galway :: Ireland rod@iol.ie
On Sunday, 31 March 2013 18:45:10, ajmcello wrote:
> unsubscribe
>
> On Sun, Mar 31, 2013 at 3:31 AM, Gavan Schneider <pg-gts@snkmail.com> wrote:
> > On 30/3/13 at 11:09 PM, D'Arcy J.M. Cain wrote:
> >
> >> I am formulating Cain's Law. Something like "If a discussion lasts
> >> long enough, someone will mention Godwin's Law."
> >
> > +1
> >
> > More formally:
> > As an online discussion grows longer, the probability of Godwin's Law
> > being mentioned approaches one.

First corollary to Cain's Law: As an online discussion grows longer, the probability of someone trying to unsubscribe by posting a reply to the discussion approaches one.

regards, Leif
Hi Gavan,

It is more about whether the problems described can be solved just by a datatype at all... Just SUMming the values in a GL transactions table would not make sense in any case to accountants - regardless of whether the result is a big number or 0 (which it always will/should be in normal situations)...

Maybe it would be better to explain the proposal better... What should the datatype ensure, from your point of view? A GL transactions table can be designed in many (different) ways. At this moment it is very hard for me to identify how a "special" datatype could help in that case...

P.S. This is not the first time I have heard from people in the finance field "it would be good if there were a special type that could handle..." - however it always ends up that the rule depends on something stored somewhere else (in another table or column) that the datatype can't be aware of... (in my opinion)

Kind Regards,

Misa

From: Gavan Schneider
Sent: 31/03/2013 12:59
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Money casting too liberal?
[... snip ...]
On 31/3/13 at 5:20 AM, D'Arcy J.M. Cain wrote:
>On Sun, 31 Mar 2013 21:57:49 +1100 Gavan Schneider wrote:
>>On 30/3/13 at 12:58 AM, D'Arcy J.M. Cain wrote:
>>>That seems extreme. What use case would there ever be for making a column always debit or always credit? I have a G/L system and most money columns either don't know about Dr/Cr or else there is another column with the G/L account which implies Dr/Cr. Where do you see a column that would be dedicated to one or the other?
>>>
>>If you have a credit card and a bank account you are already familiar with the concept of Debit and Credit columns. If the
>
>I am *very* familiar with debit and credit columns. In addition, I don't confuse columns on a display or piece of paper with columns in a database.
>
OK. My assumption was that my previous comment was not understood and I needed to plod through a basic example to make my meaning clear. While English is my first language I don't claim perfection in its usage.

....

>>it can't do that then nobody is going to bother using it for serious work since NUMERIC and INTEGER will do the job just as well without the surprises.
>
>What surprises? It is much faster than numeric and it does the formatting for you rather than requiring code like integer would. Other than that they fail your test exactly like the money type.
>
Sorry. I know you authored the type. And mine are not the only comments along these lines. The MONEY type is in the system and any and all are welcome to use it as is. From the discussion it does not suit many and my only motive was to explore ways in which it could cover a wider audience without losing its advantages, i.e., speed and specificity.

Regards
Gavan Schneider
On Sat, 2013-03-30 at 09:52 -0400, D'Arcy J.M. Cain wrote: > That's why I suggested that operations between money(2) and money(3) > should raise an error. Treat them as distinct types. I don't think typmod is currently powerful enough to do that. It's lost in many different types of expressions. Offhand, I don't even know of a way to preserve the typmod through even a simple function. Regards, Jeff Davis
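For what it's worth, the loss is easy to see with numeric, which does carry a typmod; the sketch below is only a stand-in, since money itself takes no typmod today and money(2)/money(3) is notional syntax:

-- numeric(5,2) standing in for a hypothetical money(2)
CREATE TABLE src (amt numeric(5,2));

-- the typmod survives a bare column reference...
CREATE TABLE t1 AS SELECT amt FROM src;             -- \d t1: amt is numeric(5,2)

-- ...but is gone once the value passes through any expression or function:
CREATE TABLE t2 AS SELECT amt + 0 AS amt FROM src;  -- \d t2: amt is plain numeric
CREATE TABLE t3 AS SELECT abs(amt) AS amt FROM src; -- \d t3: amt is plain numeric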
On 29/03/13 12:39, Jasen Betts wrote:
Sorry, I was at my Mum's for a few days with 'limited' Internet access - they have a much lower quota than I normally have...

> On 2013-03-28, Gavin Flower <GavinFlower@archidevsys.co.nz> wrote:
>> Hmm... This should optionally apply to time. e.g. time_i_got_up_in_the_morning should reflect the time zone where I got up - if I got up at 8am NZ time then this should be displayed, not 12pm (12 noon) to someone in Los Angeles or 3am in Tokyo! (have a 'localtime' data type? - possibly add the timezone code if displayed in a different time zone.)
>
> it was 12 noon in LA when you got up. if you want the local time of the event you can specify where you want it:
>
>     at time zone 'Pacific/Auckland'
>     at time zone 'NZDT'     -- note: some names are ambiguous eg: 'EST'
> or
>     at time zone '-13:00'   -- note: offsets are ISO, not POSIX
>
> Getting the local time of the event this way requires that you store the locale, zone name, or offset when you store the time. Or you could just cast it to text when you store it... how confusing is 'EST'? Worse than this:
>
>     set datestyle to 'sql,dmy';
>     set time zone 'Australia/Brisbane';
>     select '20130101T000000Z'::timestamptz;
>     set time zone 'Australia/Sydney';
>     select '20130101T000000Z'::timestamptz;
>     set time zone 'America/New_York';
>     select '20130101T000000Z'::timestamptz;
Yes I could store the timezone separately, but semantically it makes sense to store the local time & its time zone as a unit; that is less likely to lead to bugs when someone else (or myself in a year's time) goes to make amendments.
Storing in text is fine for display, but if I then have to also relate different local times to a common timeline, then text would not be so convenient.
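One workable compromise, sketched below with made-up table and column names, is to store the absolute instant together with the zone it was observed in, and rebuild the wall-clock time on output with AT TIME ZONE:

CREATE TABLE wakeup (
    got_up timestamptz NOT NULL,  -- absolute instant, relatable to a common timeline
    tz     text        NOT NULL   -- e.g. 'Pacific/Auckland'
);

INSERT INTO wakeup VALUES ('2013-03-28 08:00:00+13', 'Pacific/Auckland');

-- shows 08:00 local time regardless of the session's TimeZone setting:
SELECT got_up AT TIME ZONE tz AS local_time, tz FROM wakeup;

Sorting and comparing on got_up still works across zones, which covers the "common timeline" part.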
To be honest this is currently moot, as I don't have a need for it at the moment. Having said that, I realize I am tempting the gods into making it so that I do need it!
Cheers,
Gavin
On 30/03/13 04:08, Gavan Schneider wrote:
Some thoughts.

I agree 100%.
The current MONEY type might be considered akin to ASCII: perfect for a basic, US-centric accounting system where there are cents and dollars and no need to carry smaller fractions. As discussed, there are some details that could be refined.
When it comes to this type being used in full blown money systems it lacks the ability to carry fractions of cents and keep track of currencies. It also needs to play nicer with other exact types such as numeric, i.e., no intermediate calculations as real.
Therefore the discussion is really about the desired role for the MONEY type. Should it be refined in its current dollars-and-cents mode, or promoted to a more universal role (akin to a shift from ASCII to UTF)?
If there is merit in making MONEY work for most situations involving financial transactions I think the following might apply:
- keep integer as the underlying base type (for performance)
- generalise the decimal multiplier of a MONEY column so a specific MONEY column can be what its creator wants (from partial cents to millions of dollars/Yen/Other, along with rounding/truncating rules as required by the user or his/her external agencies)
- define the currency for a given column and only allow this to change in defined ways, and specifically forbid implicit changes such as would arise from altering LOCALE information
- ensure the MONEY type plays nice with other exact precision types, i.e., convert to REAL/FLOAT as a very last resort
Personally I don't think it is appropriate for the MONEY type to have variable characteristics (such as different currencies) within a given column; rather the column definition should fix the currency along with the desired decimal multiplier and whatever else is required. The actual values within the column remain simple integers. This is mostly based on performance issues: if the MONEY type is to be used it has to offer real performance benefits over bespoke NUMERIC applications.
Regards
Gavan Schneider
In the bad old days when I was a COBOL programmer we always stored money in the COBOL equivalent of an integer (numeric without a fractional part) to avoid round-off, but we displayed it with a decimal point two digits to the left. So storing as an integer (actually bigint would be required) is a good idea, with parameters to say how many effective digits are in the fractional part, how many fractional digits to display, etc. - as you said.
Cheers,
Gavin
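For anyone wanting that COBOL-style discipline today, here is a rough sketch of storing exact integer cents, pinning the column to one currency, and leaving formatting to output time. The table and function names are invented and the scale is hard-wired at two decimals:

CREATE TABLE ledger (
    id           bigserial PRIMARY KEY,
    amount_cents bigint  NOT NULL,             -- exact count of the smallest unit
    currency     char(3) NOT NULL DEFAULT 'USD'
                 CHECK (currency = 'USD')      -- no silent locale-driven changes
);

-- display scale is an output-time decision:
CREATE FUNCTION fmt_usd(cents bigint) RETURNS text
LANGUAGE sql IMMUTABLE AS $$
    SELECT to_char(cents / 100.0, 'FM9,999,999,999,990.00')
$$;

INSERT INTO ledger (amount_cents) VALUES (12345678);
SELECT fmt_usd(amount_cents) FROM ledger;      -- 123,456.78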
On 30/03/13 11:30, Gavan Schneider wrote: > On 29/3/13 at 3:32 AM, D'Arcy J.M. Cain wrote: > >> On Fri, 29 Mar 2013 11:46:40 -0400 Tom Lane wrote: >>> Well, this has been discussed before, and the majority view every >>> time has been that MONEY is a legacy thing that most people would >>> rather rip out than sink a large amount of additional effort into. > > The only reason I have tried to explore these ideas is that the type > is currently too quirky for most use cases. So I must agree that > remove/ignore is the least work option. An argument for making the > type more useful can be made by analogy to the geolocation add-in > type. Most never go there but those who need to do so seem to prefer > the builtin functionality over hand coding the same behaviour with > columns of arrays that just happen to contain location data. > >>> It has some use-cases but they are narrow, and it's not clear how >>> much wider the use-cases would be if we tried to generalize it. > > A well designed and specific tool can be worth the effort. > > The use cases include: > > Financial data, accounts in a single currency, i.e., the money > column in a transaction > > Multi currency data, i.e., keeping track of transactions across > several currencies. > specifically we are NOT doing conversions, what arrives/leaves > as $ or ¥ stays that way, > this implies the dB has tables for each area of operation or > columns for each currency > > One thing the type should not attempt or allow any implicit > transforming of alues. Mostly a currency change is a transaction and > whenever it happens it has to be recored as such, e.g., so many ¥ > leave their column, appropriate $ are added to their column, and > commission $/¥ is added to its column, also included will be: exchange > rate reference time-stamp journal reference, etc. A constraint could > be constructed to ensure the double entry book keeping zero sum > convention has been maintained across the whole transaction. > > One time this might not be so detailed is for a VIEW where something > akin to total worth is being reported. In cases like this the exchange > rates would usually be in their table and the business rules would > dictate which one is to be used to build the VIEW, e.g., end of month > report, and it might be shown with all values in a single currency > depending on the company's HQ. > > >> I wonder if our vision isn't a little tunneled here. Using this type >> for money is, perhaps, a specialized use and the type should really be >> called something else and modified to remove all connotations of money >> from it. So... >> > >> - Drop the currency symbol >> - Allow number of decimals to be defined once for the column >> - Don't use locale except to specify decimal separator (',' vs. '.') >> > Mostly this is cosmetic and only relevant for parsing text on data > entry or default formatting with SELECT on the command line. The power > of the class is that none of this is in the data other than as dB > column flags. The values themselves are integer. The class is meant to > keep the books moving right along. > >> - Allow operations against numeric >> > Whatever else is done this should happen. > >> Not sure what to rename it to. Decimal would be good if it wasn't >> already in use. Maybe DecimalInt. >> > I don't think there is much use for another fixed precision integral > type. NUMERIC does a good job when INTEGER isn't suitable. 
> If this exercise is worth anything then MONEY should just do its job better so people who track money (and there is an awful lot of them) will find it useful.
>
>>> My own experience with this sort of thing leads me to think that real applications dealing with a variety of currencies will be needing to store additional details, such as the exact exchange rate that applied to a particular transaction. So while merely
>>
>> Seems like something that can be stored in a different column.
>>
> Exactly. We need to think this through as would a real user.
>
> If the business is receiving money from multiple regions then there will be rows which show the currency, number of units (numeric type since the column is not devoted to a specific currency), transaction tracing data, exchange reference (another table), amt_received::MONEY('USD','D2'), amt_transaction_fee::MONEY('USD','D3'), etc.
>
> Within the accounts of the organisation the MONEY columns are likely to be in a single currency with movements between ledgers in the time honoured fashion of adding to this while removing the same from other(s) so all money entries add to zero across the row. Movements between currencies are just another transaction as detailed above.
>
> I have sketched something of a notation for MONEY columns along these lines:
>
>     amt_received MONEY (
>         CURRENCY   -- e.g., 'USD' 'AUD' 'YEN' ...
>         [, SCALE   -- default as per currency, e.g. USD 2 decimals,
>                    -- but could be used to see money in bigger units
>                    -- such as '000s (e.g., that end-of-month view)
>         [, ROUND   -- need to allow for multiple rules here; sometimes
>                    -- cents are just dropped, otherwise it can be
>                    -- required that rounding is up or down
>         [, OTHER?
>         ]]])
>
> I have left the display characteristics out (they could be there as a default) but column values are going to be displayed however the application wants them, and this only applies at the time of reporting. Each currency can carry the conventional defaults and the application should have formatting tools to alter this during output.
>
> Inputting money values, i.e., text to MONEY, should follow the conventions of the target currency. Specifically the input conversion routine should handle the symbol (or no symbol) and all the usual conventions for negative values, decimals and separators. It should throw an error if asked to add a value to a USD column but finds a yen symbol in the text. (There is no such help for all of us sharing the $ symbol. :) Also it should parse such things as 123.456,00 (Europe) and 123,456.00 (Anglo) properly. Errors need to be thrown when it looks wrong, e.g. 123,456.789.00 -- since this is likely to be corrupted data, and finally gets me back to the issue raised by the OP. :)
>
> Hope this hasn't been too much of a ramble.
>
> Regards, and happy (Western) Easter to all,
> Gavan Schneider

While for my current project I don't need this, I've dealt with money many times, so it would be great to have a proper money type when I next need to handle money in a database (which is bound to happen sooner or later!)!

How about a picture clause for display, which encodes details about how to handle negative numbers?

QUESTION: How best to handle the difference between English and European conventions for thousand separators:
English convention:  NZ$1,000,005.34
European convention: NZ$1.000.005,34
Note how the use of '.' & ',' is swapped!

QUESTION: Should the type of $ (NZ$ vs US$) be dropped if it matches the locale?

Cheers,
Gavin
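On the reject-don't-guess input question, a stricter parser than the built-in money one can already be had with an ordinary function. The sketch below handles Anglo-formatted US dollars only, and parse_usd_cents is an invented name:

CREATE FUNCTION parse_usd_cents(txt text) RETURNS bigint
LANGUAGE plpgsql IMMUTABLE STRICT AS $func$
BEGIN
    -- optional minus, optional $, digits with well-placed thousands separators,
    -- optional two-digit cents; anything else raises an error instead of guessing
    IF txt !~ '^-?\$?(\d+|\d{1,3}(,\d{3})*)(\.\d\d)?$' THEN
        RAISE EXCEPTION 'invalid money input: "%"', txt;
    END IF;
    RETURN (translate(txt, '$,', '')::numeric * 100)::bigint;
END
$func$;

-- parse_usd_cents('123,456.78')     returns 12345678
-- parse_usd_cents('123,456.789.00') raises an error rather than silently rounding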
On 30/03/13 08:36, Michael Nolan wrote:
In New Zealand at one point we rounded to the nearest 5 cents, now to 10 cents; probably in a few years we will round to the nearest 20c or 50c... Not sure how people, if they ever did, coped with printing values before or after the change in the value to be rounded (say to the nearest 5c one day, then the nearest 10c the next)!

> On 3/27/13, Steve Crawford <scrawford@pinpointresearch.com> wrote:
>> Somewhat more worrisome is the fact that it automatically rounds input (away from zero) to fit.
>>
>> select '123.456789'::money;
>>  money
>> ---------
>>  $123.46
>
> So does casting to an integer:
>
> select 1.25::integer;
>  int4
> ------
>     1
>
> And then there's this:
>
> create table wkdata (numval numeric(5,2))
> CREATE TABLE
> Time: 6.761 ms
> nolan=> insert into wkdata
> nolan-> values (123.456789);
> INSERT 569625265 1
> Time: 4.063 ms
> nolan=> select * from wkdata;
>  numval
> --------
>  123.46
>
> So rounding a money field doesn't seem inconsistent with other data types.
> --
> Mike Nolan
There are many rounding modes; Java's RoundingMode enum, for instance, defines the following (a short SQL illustration of a few of them follows the list):
CEILING: Rounding mode to round towards positive infinity.
DOWN: Rounding mode to round towards zero.
FLOOR: Rounding mode to round towards negative infinity.
HALF_DOWN: Rounding mode to round towards "nearest neighbor" unless both neighbors are equidistant, in which case round down.
HALF_EVEN: Rounding mode to round towards the "nearest neighbor" unless both neighbors are equidistant, in which case, round towards the even neighbor.
HALF_UP: Rounding mode to round towards "nearest neighbor" unless both neighbors are equidistant, in which case round up.
UNNECESSARY: Rounding mode to assert that the requested operation has an exact result, hence no rounding is necessary.
UP: Rounding mode to round away from zero.
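For reference, several of these map directly onto stock functions on numeric. PostgreSQL's round() on numeric is HALF_UP-style (ties go away from zero); as far as I know there is no built-in HALF_EVEN for numeric, so banker's rounding would need a user-defined function:

-- each alias names the rounding mode being demonstrated
SELECT round(2.345, 2)  AS half_up,     -- 2.35
       trunc(2.345, 2)  AS down,        -- 2.34
       floor(-2.5)      AS floor_mode,  -- -3
       ceil(-2.5)       AS ceil_mode;   -- -2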
On 4/2/2013 12:50 AM, Gavin Flower wrote:
In the bad old days when I was a COBOL programmer we always stored money in the COBOL equivalent of an integer (numeric without a fractional part) to avoid round off, but we displayed with a decimal point to digits to the left. So storing as an integer (actually bigint would be required) is a good idea, with parameters to say how many effective digits in the fractional part, and how many fractional digits to display etc. - as you said.
COBOL Numeric was BCD, the same as NUMERIC in SQL (yes, I know postgresql internally uses a base 10000 notation for this, storing it as an array of short ints, but effectively it's equivalent to BCD).
-- john r pierce 37N 122W somewhere on the middle of the left coast
On 03/04/13 07:16, John R Pierce wrote:
It was many years ago! :-)

> On 4/2/2013 12:50 AM, Gavin Flower wrote:
>> In the bad old days when I was a COBOL programmer we always stored money in the COBOL equivalent of an integer (numeric without a fractional part) to avoid round off, but we displayed with a decimal point two digits to the left. So storing as an integer (actually bigint would be required) is a good idea, with parameters to say how many effective digits in the fractional part, and how many fractional digits to display etc. - as you said.
>
> COBOL Numeric was BCD. same as NUMERIC in SQL (yes, I know postgresql internally uses a base 10000 notation for this, storing it as an array of short ints, but effectively it's equivalent to BCD).
>
> --
> john r pierce                                      37N 122W
> somewhere on the middle of the left coast