On Tue, Aug 21, 2012 at 11:27 PM, Wells Oliver <wellsoliver@gmail.com> wrote:
> We have a lot of tables which store numeric data. These tables all use the
> numeric type, where the values are 95% integer values. We used numeric
> because it eliminated the need for casting during division to yield a
> floating point value.
>
> I'm curious as to whether this would have performance and/or disk size
> implications. Would converting these columns to integer (or double precision
> on the handful that require the precision) and forcing developers to use
> explicit casting be worth the time?
>
> Thanks for any clarification.
Calculations against numeric are several orders of magnitude slower
than native binary operations. Fortunately, the time the database
spends doing these kinds of calculations is often a tiny fraction of
overall execution time, so I'd advise giving numeric a whirl unless
you measure a big performance hit. Let's put it this way: the native
binary types are a performance hack that comes with all kinds of
weird baggage that percolates up and uglifies your code; the casting
example you gave is a classic case in point. Database "integer" types
are not in fact integers but a physically constrained approximation
of them. Floating point types are even worse.
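
To make the casting point concrete, here's a quick psql illustration
(the exact number of trailing zeros in the numeric display may vary):

```sql
-- integer / integer truncates toward zero
SELECT 7 / 2;              -- 3

-- casting either operand to numeric keeps the fraction
SELECT 7::numeric / 2;     -- 3.5 (numeric, shown with trailing zeros)

-- a numeric operand needs no cast, which is the OP's rationale
SELECT 7.0 / 2;            -- 3.5
```

With integer columns, every division site needs a cast like the one
above; with numeric columns the fractional result comes for free.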
Another example: I just learned, after many years of professional
programming, that -2147483648 / -1 raises a hardware exception
(the mathematically correct result, +2147483648, doesn't fit in a
signed 32-bit integer). This is exactly the kind of thing that makes
me think rote use of hardware integer types is bad practice.
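
The same edge case surfaces in SQL. A sketch in psql; the behavior
here depends on your PostgreSQL version, since recent releases guard
the int4 division and raise an error rather than letting the CPU trap:

```sql
-- -2147483648 / -1 would be +2147483648, which does not fit in a
-- signed 32-bit integer, so the raw machine instruction traps
SELECT (-2147483648)::int4 / (-1)::int4;
-- ERROR: integer out of range   (on a guarded build)

-- numeric has no such edge case
SELECT (-2147483648)::numeric / -1;   -- 2147483648
```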
merlin